00:00:00.000 Started by upstream project "autotest-per-patch" build number 127086 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.092 The recommended git tool is: git 00:00:00.092 using credential 00000000-0000-0000-0000-000000000002 00:00:00.095 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.132 Fetching changes from the remote Git repository 00:00:00.133 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.178 Using shallow fetch with depth 1 00:00:00.178 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.178 > git --version # timeout=10 00:00:00.212 > git --version # 'git version 2.39.2' 00:00:00.212 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.237 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.237 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.755 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.768 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.782 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:04.782 > git config core.sparsecheckout # timeout=10 00:00:04.807 > git read-tree -mu HEAD # timeout=10 00:00:04.825 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:04.865 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:04.866 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:04.955 [Pipeline] Start of Pipeline 00:00:04.968 [Pipeline] library 00:00:04.969 Loading library shm_lib@master 00:00:07.456 Library shm_lib@master is cached. Copying from home. 00:00:07.483 [Pipeline] node 00:00:07.595 Running on GP2 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.598 [Pipeline] { 00:00:07.607 [Pipeline] catchError 00:00:07.609 [Pipeline] { 00:00:07.622 [Pipeline] wrap 00:00:07.630 [Pipeline] { 00:00:07.638 [Pipeline] stage 00:00:07.640 [Pipeline] { (Prologue) 00:00:07.823 [Pipeline] sh 00:00:08.099 + logger -p user.info -t JENKINS-CI 00:00:08.125 [Pipeline] echo 00:00:08.127 Node: GP2 00:00:08.133 [Pipeline] sh 00:00:08.425 [Pipeline] setCustomBuildProperty 00:00:08.436 [Pipeline] echo 00:00:08.437 Cleanup processes 00:00:08.441 [Pipeline] sh 00:00:08.716 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.716 2386379 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.729 [Pipeline] sh 00:00:09.010 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.010 ++ grep -v 'sudo pgrep' 00:00:09.010 ++ awk '{print $1}' 00:00:09.010 + sudo kill -9 00:00:09.010 + true 00:00:09.023 [Pipeline] cleanWs 00:00:09.031 [WS-CLEANUP] Deleting project workspace... 00:00:09.031 [WS-CLEANUP] Deferred wipeout is used... 
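Note on the "Cleanup processes" step above: it is the stock guard against stale runs, listing anything still executing out of the workspace's spdk tree, killing it, and tolerating an empty match (the bare "+ true" after "kill -9" is that tolerance). A minimal bash sketch of the same idiom follows; the standalone-script framing and the WORKSPACE variable are illustrative assumptions, not the job's actual helper:

#!/usr/bin/env bash
# Kill leftover autotest processes from a previous run (illustrative sketch).
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest   # assumed, as seen in the log
# pgrep -af prints "PID full-command-line"; drop the pgrep invocation itself, keep PIDs.
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# kill fails when $pids is empty, so "|| true" mirrors the "+ true" in the log.
sudo kill -9 $pids || true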
00:00:09.038 [WS-CLEANUP] done 00:00:09.042 [Pipeline] setCustomBuildProperty 00:00:09.055 [Pipeline] sh 00:00:09.333 + sudo git config --global --replace-all safe.directory '*' 00:00:09.417 [Pipeline] httpRequest 00:00:09.435 [Pipeline] echo 00:00:09.436 Sorcerer 10.211.164.101 is alive 00:00:09.444 [Pipeline] httpRequest 00:00:09.448 HttpMethod: GET 00:00:09.448 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.449 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.451 Response Code: HTTP/1.1 200 OK 00:00:09.451 Success: Status code 200 is in the accepted range: 200,404 00:00:09.452 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:10.790 [Pipeline] sh 00:00:11.094 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:11.112 [Pipeline] httpRequest 00:00:11.137 [Pipeline] echo 00:00:11.139 Sorcerer 10.211.164.101 is alive 00:00:11.150 [Pipeline] httpRequest 00:00:11.155 HttpMethod: GET 00:00:11.156 URL: http://10.211.164.101/packages/spdk_ee633e585d2c320de0bfc447bad1c5870f750b2f.tar.gz 00:00:11.156 Sending request to url: http://10.211.164.101/packages/spdk_ee633e585d2c320de0bfc447bad1c5870f750b2f.tar.gz 00:00:11.166 Response Code: HTTP/1.1 200 OK 00:00:11.166 Success: Status code 200 is in the accepted range: 200,404 00:00:11.167 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_ee633e585d2c320de0bfc447bad1c5870f750b2f.tar.gz 00:00:35.324 [Pipeline] sh 00:00:35.615 + tar --no-same-owner -xf spdk_ee633e585d2c320de0bfc447bad1c5870f750b2f.tar.gz 00:00:38.934 [Pipeline] sh 00:00:39.217 + git -C spdk log --oneline -n5 00:00:39.217 ee633e585 rpc.py: access bdev rpcs directly from rpc module 00:00:39.217 6f18624d4 python/rpc: Python rpc call generator. 
00:00:39.217 da8d49b2f python/rpc: Replace bdev.py with generated rpc's 00:00:39.217 8711e7e9b autotest: reduce accel tests runs with SPDK_TEST_ACCEL flag 00:00:39.217 50222f810 configure: don't exit on non Intel platforms 00:00:39.229 [Pipeline] } 00:00:39.248 [Pipeline] // stage 00:00:39.259 [Pipeline] stage 00:00:39.261 [Pipeline] { (Prepare) 00:00:39.280 [Pipeline] writeFile 00:00:39.298 [Pipeline] sh 00:00:39.581 + logger -p user.info -t JENKINS-CI 00:00:39.594 [Pipeline] sh 00:00:39.878 + logger -p user.info -t JENKINS-CI 00:00:39.891 [Pipeline] sh 00:00:40.175 + cat autorun-spdk.conf 00:00:40.175 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.175 SPDK_TEST_NVMF=1 00:00:40.175 SPDK_TEST_NVME_CLI=1 00:00:40.175 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.175 SPDK_TEST_NVMF_NICS=e810 00:00:40.175 SPDK_TEST_VFIOUSER=1 00:00:40.175 SPDK_RUN_UBSAN=1 00:00:40.175 NET_TYPE=phy 00:00:40.183 RUN_NIGHTLY=0 00:00:40.188 [Pipeline] readFile 00:00:40.214 [Pipeline] withEnv 00:00:40.216 [Pipeline] { 00:00:40.231 [Pipeline] sh 00:00:40.515 + set -ex 00:00:40.515 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:40.515 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:40.515 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.515 ++ SPDK_TEST_NVMF=1 00:00:40.515 ++ SPDK_TEST_NVME_CLI=1 00:00:40.515 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:40.515 ++ SPDK_TEST_NVMF_NICS=e810 00:00:40.515 ++ SPDK_TEST_VFIOUSER=1 00:00:40.515 ++ SPDK_RUN_UBSAN=1 00:00:40.515 ++ NET_TYPE=phy 00:00:40.515 ++ RUN_NIGHTLY=0 00:00:40.515 + case $SPDK_TEST_NVMF_NICS in 00:00:40.515 + DRIVERS=ice 00:00:40.515 + [[ tcp == \r\d\m\a ]] 00:00:40.515 + [[ -n ice ]] 00:00:40.515 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:40.515 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:40.515 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:40.515 rmmod: ERROR: Module irdma is not currently loaded 00:00:40.515 rmmod: ERROR: Module i40iw is not currently loaded 00:00:40.515 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:40.515 + true 00:00:40.515 + for D in $DRIVERS 00:00:40.515 + sudo modprobe ice 00:00:40.515 + exit 0 00:00:40.524 [Pipeline] } 00:00:40.543 [Pipeline] // withEnv 00:00:40.549 [Pipeline] } 00:00:40.626 [Pipeline] // stage 00:00:40.633 [Pipeline] catchError 00:00:40.634 [Pipeline] { 00:00:40.642 [Pipeline] timeout 00:00:40.642 Timeout set to expire in 50 min 00:00:40.643 [Pipeline] { 00:00:40.652 [Pipeline] stage 00:00:40.653 [Pipeline] { (Tests) 00:00:40.663 [Pipeline] sh 00:00:40.940 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.940 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.940 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.940 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:40.940 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:40.940 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:40.940 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:40.940 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:40.940 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:40.940 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:40.940 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:40.940 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:40.940 + source /etc/os-release 00:00:40.940 ++ NAME='Fedora Linux' 00:00:40.940 ++ VERSION='38 (Cloud Edition)' 00:00:40.940 ++ ID=fedora 00:00:40.940 ++ VERSION_ID=38 00:00:40.940 ++ VERSION_CODENAME= 00:00:40.940 ++ PLATFORM_ID=platform:f38 00:00:40.940 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:40.940 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:40.940 ++ LOGO=fedora-logo-icon 00:00:40.940 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:40.940 ++ HOME_URL=https://fedoraproject.org/ 00:00:40.940 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:40.940 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:40.940 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:40.940 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:40.940 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:40.940 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:40.940 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:40.940 ++ SUPPORT_END=2024-05-14 00:00:40.940 ++ VARIANT='Cloud Edition' 00:00:40.940 ++ VARIANT_ID=cloud 00:00:40.940 + uname -a 00:00:40.940 Linux spdk-gp-02 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:40.940 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:41.875 Hugepages 00:00:41.875 node hugesize free / total 00:00:41.875 node0 1048576kB 0 / 0 00:00:41.875 node0 2048kB 0 / 0 00:00:41.875 node1 1048576kB 0 / 0 00:00:41.875 node1 2048kB 0 / 0 00:00:41.875 00:00:41.875 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:41.875 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - - 00:00:41.875 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - - 00:00:41.875 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - - 00:00:41.875 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - - 00:00:41.875 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - - 00:00:41.875 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - - 00:00:41.875 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - - 00:00:41.875 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - - 00:00:41.875 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - - 00:00:41.875 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - - 00:00:41.875 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - - 00:00:41.875 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - - 00:00:41.875 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - - 00:00:41.875 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - - 00:00:41.875 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - - 00:00:41.875 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - - 00:00:41.875 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:41.875 + rm -f /tmp/spdk-ld-path 00:00:41.875 + source autorun-spdk.conf 00:00:41.875 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.875 ++ SPDK_TEST_NVMF=1 00:00:41.875 ++ SPDK_TEST_NVME_CLI=1 00:00:41.875 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:41.875 ++ SPDK_TEST_NVMF_NICS=e810 00:00:41.875 ++ SPDK_TEST_VFIOUSER=1 00:00:41.875 ++ SPDK_RUN_UBSAN=1 00:00:41.875 ++ NET_TYPE=phy 00:00:41.875 ++ RUN_NIGHTLY=0 00:00:41.875 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:41.875 + [[ -n '' ]] 00:00:41.875 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:41.875 + for M in /var/spdk/build-*-manifest.txt 00:00:41.875 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:00:41.875 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:41.875 + for M in /var/spdk/build-*-manifest.txt 00:00:41.875 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:41.875 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:41.875 ++ uname 00:00:41.875 + [[ Linux == \L\i\n\u\x ]] 00:00:41.875 + sudo dmesg -T 00:00:41.875 + sudo dmesg --clear 00:00:41.875 + dmesg_pid=2386947 00:00:41.875 + [[ Fedora Linux == FreeBSD ]] 00:00:41.875 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:41.875 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:41.875 + sudo dmesg -Tw 00:00:41.875 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:41.875 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:41.875 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:41.875 + [[ -x /usr/src/fio-static/fio ]] 00:00:41.875 + export FIO_BIN=/usr/src/fio-static/fio 00:00:41.875 + FIO_BIN=/usr/src/fio-static/fio 00:00:41.875 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:41.875 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:41.875 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:41.875 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:41.875 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:41.875 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:41.875 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:41.875 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:41.875 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:41.875 Test configuration: 00:00:41.875 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.875 SPDK_TEST_NVMF=1 00:00:41.875 SPDK_TEST_NVME_CLI=1 00:00:41.875 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:41.875 SPDK_TEST_NVMF_NICS=e810 00:00:41.875 SPDK_TEST_VFIOUSER=1 00:00:41.875 SPDK_RUN_UBSAN=1 00:00:41.875 NET_TYPE=phy 00:00:41.875 RUN_NIGHTLY=0 18:57:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:41.875 18:57:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:41.875 18:57:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:41.875 18:57:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:41.875 18:57:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.876 18:57:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.876 18:57:47 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.876 18:57:47 -- paths/export.sh@5 -- $ export PATH 00:00:41.876 18:57:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.876 18:57:47 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:41.876 18:57:47 -- common/autobuild_common.sh@447 -- $ date +%s 00:00:41.876 18:57:47 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721840267.XXXXXX 00:00:41.876 18:57:47 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721840267.yj6uLk 00:00:41.876 18:57:47 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:00:41.876 18:57:47 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:00:41.876 18:57:47 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:41.876 18:57:47 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:41.876 18:57:47 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:41.876 18:57:47 -- common/autobuild_common.sh@463 -- $ get_config_params 00:00:41.876 18:57:47 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:00:41.876 18:57:47 -- common/autotest_common.sh@10 -- $ set +x 00:00:41.876 18:57:47 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:41.876 18:57:47 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:00:41.876 18:57:47 -- pm/common@17 -- $ local monitor 00:00:41.876 18:57:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.876 18:57:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.876 18:57:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.876 18:57:47 -- pm/common@21 -- $ date +%s 00:00:41.876 18:57:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.876 18:57:47 -- pm/common@21 -- $ date +%s 00:00:41.876 18:57:47 -- pm/common@25 -- $ sleep 1 00:00:41.876 18:57:47 -- pm/common@21 -- $ date +%s 00:00:41.876 18:57:47 -- pm/common@21 -- $ date +%s 00:00:41.876 18:57:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721840267 00:00:41.876 18:57:47 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721840267
00:00:41.876 18:57:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721840267
00:00:41.876 18:57:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721840267
00:00:42.136 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721840267_collect-vmstat.pm.log
00:00:42.136 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721840267_collect-cpu-load.pm.log
00:00:42.136 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721840267_collect-cpu-temp.pm.log
00:00:42.136 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721840267_collect-bmc-pm.bmc.pm.log
00:00:42.136 Traceback (most recent call last):
00:00:42.136   File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py", line 24, in <module>
00:00:42.136     import spdk.rpc as rpc # noqa
00:00:42.136     ^^^^^^^^^^^^^^^^^^^^^^
00:00:42.136   File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/rpc/__init__.py", line 13, in <module>
00:00:42.136     from . import bdev
00:00:42.136   File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/rpc/bdev.py", line 8, in <module>
00:00:42.136     from spdk.rpc.rpc import *
00:00:42.136 ModuleNotFoundError: No module named 'spdk.rpc.rpc'
00:00:43.073 18:57:48 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:00:43.073 18:57:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:43.073 18:57:48 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:43.073 18:57:48 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:43.073 18:57:48 -- spdk/autobuild.sh@16 -- $ date -u
00:00:43.073 Wed Jul 24 04:57:48 PM UTC 2024
00:00:43.073 18:57:48 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:43.073 v24.09-pre-314-gee633e585
00:00:43.073 18:57:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:43.073 18:57:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:43.073 18:57:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:43.073 18:57:48 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:00:43.073 18:57:48 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:00:43.073 18:57:48 -- common/autotest_common.sh@10 -- $ set +x
00:00:43.073 ************************************
00:00:43.073 START TEST ubsan
00:00:43.073 ************************************
00:00:43.073 18:57:48 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:00:43.073 using ubsan
00:00:43.073
00:00:43.073 real 0m0.001s
00:00:43.073 user 0m0.000s
00:00:43.073 sys 0m0.001s
00:00:43.073 18:57:48 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:00:43.073 18:57:48 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:43.073 ************************************
00:00:43.073 END TEST ubsan
00:00:43.073 ************************************
00:00:43.073 18:57:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:43.073 18:57:48 -- spdk/autobuild.sh@31 -- 
$ case "$SPDK_TEST_AUTOBUILD" in 00:00:43.073 18:57:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:43.073 18:57:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:43.073 18:57:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:43.073 18:57:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:43.073 18:57:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:43.073 18:57:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:43.073 18:57:48 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:43.073 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:43.073 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:43.334 Using 'verbs' RDMA provider 00:00:54.252 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:06.461 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:06.461 Creating mk/config.mk...done. 00:01:06.461 Creating mk/cc.flags.mk...done. 00:01:06.461 Type 'make' to build. 00:01:06.461 18:58:10 -- spdk/autobuild.sh@69 -- $ run_test make make -j32 00:01:06.461 18:58:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:06.461 18:58:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:06.461 18:58:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.461 ************************************ 00:01:06.461 START TEST make 00:01:06.461 ************************************ 00:01:06.461 18:58:10 make -- common/autotest_common.sh@1125 -- $ make -j32 00:01:06.461 The Meson build system 00:01:06.461 Version: 1.3.1 00:01:06.461 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:06.461 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:06.461 Build type: native build 00:01:06.461 Project name: libvfio-user 00:01:06.461 Project version: 0.0.1 00:01:06.461 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:06.461 C linker for the host machine: cc ld.bfd 2.39-16 00:01:06.461 Host machine cpu family: x86_64 00:01:06.461 Host machine cpu: x86_64 00:01:06.461 Run-time dependency threads found: YES 00:01:06.461 Library dl found: YES 00:01:06.461 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:06.461 Run-time dependency json-c found: YES 0.17 00:01:06.461 Run-time dependency cmocka found: YES 1.1.7 00:01:06.461 Program pytest-3 found: NO 00:01:06.461 Program flake8 found: NO 00:01:06.461 Program misspell-fixer found: NO 00:01:06.461 Program restructuredtext-lint found: NO 00:01:06.461 Program valgrind found: YES (/usr/bin/valgrind) 00:01:06.461 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:06.461 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:06.461 Compiler for C supports arguments -Wwrite-strings: YES 00:01:06.461 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:06.461 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:06.461 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:06.461 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:06.461 Build targets in project: 8 00:01:06.461 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:06.461 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:06.461 00:01:06.461 libvfio-user 0.0.1 00:01:06.461 00:01:06.461 User defined options 00:01:06.461 buildtype : debug 00:01:06.461 default_library: shared 00:01:06.461 libdir : /usr/local/lib 00:01:06.461 00:01:06.461 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:07.407 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:07.675 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:07.675 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:07.675 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:07.675 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:07.675 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:07.675 [6/37] Compiling C object samples/server.p/server.c.o 00:01:07.675 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:07.675 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:07.675 [9/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:07.675 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:07.675 [11/37] Compiling C object samples/null.p/null.c.o 00:01:07.675 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:07.675 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:07.675 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:07.938 [15/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:07.938 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:07.938 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:07.938 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:07.938 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:07.938 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:07.938 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:07.938 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:07.938 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:07.938 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:07.938 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:07.938 [26/37] Compiling C object samples/client.p/client.c.o 00:01:07.938 [27/37] Linking target samples/client 00:01:07.938 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:08.204 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:08.204 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:08.204 [31/37] Linking target test/unit_tests 00:01:08.204 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:08.204 [33/37] Linking target samples/server 00:01:08.204 
[34/37] Linking target samples/null 00:01:08.467 [35/37] Linking target samples/gpio-pci-idio-16 00:01:08.467 [36/37] Linking target samples/lspci 00:01:08.467 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:08.467 INFO: autodetecting backend as ninja 00:01:08.467 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:08.467 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:09.103 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:09.103 ninja: no work to do. 00:01:15.699 The Meson build system 00:01:15.699 Version: 1.3.1 00:01:15.699 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:15.699 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:15.699 Build type: native build 00:01:15.699 Program cat found: YES (/usr/bin/cat) 00:01:15.699 Project name: DPDK 00:01:15.699 Project version: 24.03.0 00:01:15.699 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:15.699 C linker for the host machine: cc ld.bfd 2.39-16 00:01:15.699 Host machine cpu family: x86_64 00:01:15.699 Host machine cpu: x86_64 00:01:15.699 Message: ## Building in Developer Mode ## 00:01:15.699 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:15.699 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:15.699 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:15.699 Program python3 found: YES (/usr/bin/python3) 00:01:15.699 Program cat found: YES (/usr/bin/cat) 00:01:15.699 Compiler for C supports arguments -march=native: YES 00:01:15.699 Checking for size of "void *" : 8 00:01:15.699 Checking for size of "void *" : 8 (cached) 00:01:15.699 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:15.699 Library m found: YES 00:01:15.699 Library numa found: YES 00:01:15.699 Has header "numaif.h" : YES 00:01:15.699 Library fdt found: NO 00:01:15.699 Library execinfo found: NO 00:01:15.699 Has header "execinfo.h" : YES 00:01:15.699 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:15.699 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:15.699 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:15.699 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:15.699 Run-time dependency openssl found: YES 3.0.9 00:01:15.699 Run-time dependency libpcap found: YES 1.10.4 00:01:15.699 Has header "pcap.h" with dependency libpcap: YES 00:01:15.699 Compiler for C supports arguments -Wcast-qual: YES 00:01:15.699 Compiler for C supports arguments -Wdeprecated: YES 00:01:15.699 Compiler for C supports arguments -Wformat: YES 00:01:15.699 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:15.699 Compiler for C supports arguments -Wformat-security: NO 00:01:15.699 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:15.699 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:15.699 Compiler for C supports arguments -Wnested-externs: YES 00:01:15.699 Compiler for C supports arguments -Wold-style-definition: YES 00:01:15.699 Compiler for C supports arguments -Wpointer-arith: 
YES 00:01:15.699 Compiler for C supports arguments -Wsign-compare: YES 00:01:15.699 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:15.699 Compiler for C supports arguments -Wundef: YES 00:01:15.699 Compiler for C supports arguments -Wwrite-strings: YES 00:01:15.699 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:15.699 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:15.699 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:15.699 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:15.699 Program objdump found: YES (/usr/bin/objdump) 00:01:15.699 Compiler for C supports arguments -mavx512f: YES 00:01:15.699 Checking if "AVX512 checking" compiles: YES 00:01:15.699 Fetching value of define "__SSE4_2__" : 1 00:01:15.699 Fetching value of define "__AES__" : 1 00:01:15.699 Fetching value of define "__AVX__" : 1 00:01:15.699 Fetching value of define "__AVX2__" : (undefined) 00:01:15.699 Fetching value of define "__AVX512BW__" : (undefined) 00:01:15.699 Fetching value of define "__AVX512CD__" : (undefined) 00:01:15.699 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:15.699 Fetching value of define "__AVX512F__" : (undefined) 00:01:15.699 Fetching value of define "__AVX512VL__" : (undefined) 00:01:15.699 Fetching value of define "__PCLMUL__" : 1 00:01:15.699 Fetching value of define "__RDRND__" : (undefined) 00:01:15.699 Fetching value of define "__RDSEED__" : (undefined) 00:01:15.699 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:15.699 Fetching value of define "__znver1__" : (undefined) 00:01:15.699 Fetching value of define "__znver2__" : (undefined) 00:01:15.699 Fetching value of define "__znver3__" : (undefined) 00:01:15.699 Fetching value of define "__znver4__" : (undefined) 00:01:15.699 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:15.699 Message: lib/log: Defining dependency "log" 00:01:15.699 Message: lib/kvargs: Defining dependency "kvargs" 00:01:15.699 Message: lib/telemetry: Defining dependency "telemetry" 00:01:15.699 Checking for function "getentropy" : NO 00:01:15.699 Message: lib/eal: Defining dependency "eal" 00:01:15.699 Message: lib/ring: Defining dependency "ring" 00:01:15.699 Message: lib/rcu: Defining dependency "rcu" 00:01:15.699 Message: lib/mempool: Defining dependency "mempool" 00:01:15.699 Message: lib/mbuf: Defining dependency "mbuf" 00:01:15.699 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:15.699 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:15.699 Compiler for C supports arguments -mpclmul: YES 00:01:15.699 Compiler for C supports arguments -maes: YES 00:01:15.699 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:15.699 Compiler for C supports arguments -mavx512bw: YES 00:01:15.699 Compiler for C supports arguments -mavx512dq: YES 00:01:15.699 Compiler for C supports arguments -mavx512vl: YES 00:01:15.699 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:15.699 Compiler for C supports arguments -mavx2: YES 00:01:15.699 Compiler for C supports arguments -mavx: YES 00:01:15.699 Message: lib/net: Defining dependency "net" 00:01:15.699 Message: lib/meter: Defining dependency "meter" 00:01:15.699 Message: lib/ethdev: Defining dependency "ethdev" 00:01:15.699 Message: lib/pci: Defining dependency "pci" 00:01:15.699 Message: lib/cmdline: Defining dependency "cmdline" 00:01:15.699 Message: lib/hash: Defining dependency "hash" 00:01:15.699 Message: lib/timer: Defining 
dependency "timer" 00:01:15.699 Message: lib/compressdev: Defining dependency "compressdev" 00:01:15.699 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:15.699 Message: lib/dmadev: Defining dependency "dmadev" 00:01:15.699 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:15.699 Message: lib/power: Defining dependency "power" 00:01:15.699 Message: lib/reorder: Defining dependency "reorder" 00:01:15.699 Message: lib/security: Defining dependency "security" 00:01:15.699 Has header "linux/userfaultfd.h" : YES 00:01:15.699 Has header "linux/vduse.h" : YES 00:01:15.699 Message: lib/vhost: Defining dependency "vhost" 00:01:15.699 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:15.699 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:15.699 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:15.699 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:15.699 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:15.699 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:15.699 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:15.699 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:15.699 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:15.699 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:15.699 Program doxygen found: YES (/usr/bin/doxygen) 00:01:15.699 Configuring doxy-api-html.conf using configuration 00:01:15.699 Configuring doxy-api-man.conf using configuration 00:01:15.699 Program mandb found: YES (/usr/bin/mandb) 00:01:15.699 Program sphinx-build found: NO 00:01:15.699 Configuring rte_build_config.h using configuration 00:01:15.699 Message: 00:01:15.699 ================= 00:01:15.699 Applications Enabled 00:01:15.699 ================= 00:01:15.699 00:01:15.699 apps: 00:01:15.699 00:01:15.699 00:01:15.699 Message: 00:01:15.699 ================= 00:01:15.699 Libraries Enabled 00:01:15.699 ================= 00:01:15.699 00:01:15.699 libs: 00:01:15.699 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:15.700 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:15.700 cryptodev, dmadev, power, reorder, security, vhost, 00:01:15.700 00:01:15.700 Message: 00:01:15.700 =============== 00:01:15.700 Drivers Enabled 00:01:15.700 =============== 00:01:15.700 00:01:15.700 common: 00:01:15.700 00:01:15.700 bus: 00:01:15.700 pci, vdev, 00:01:15.700 mempool: 00:01:15.700 ring, 00:01:15.700 dma: 00:01:15.700 00:01:15.700 net: 00:01:15.700 00:01:15.700 crypto: 00:01:15.700 00:01:15.700 compress: 00:01:15.700 00:01:15.700 vdpa: 00:01:15.700 00:01:15.700 00:01:15.700 Message: 00:01:15.700 ================= 00:01:15.700 Content Skipped 00:01:15.700 ================= 00:01:15.700 00:01:15.700 apps: 00:01:15.700 dumpcap: explicitly disabled via build config 00:01:15.700 graph: explicitly disabled via build config 00:01:15.700 pdump: explicitly disabled via build config 00:01:15.700 proc-info: explicitly disabled via build config 00:01:15.700 test-acl: explicitly disabled via build config 00:01:15.700 test-bbdev: explicitly disabled via build config 00:01:15.700 test-cmdline: explicitly disabled via build config 00:01:15.700 test-compress-perf: explicitly disabled via build config 00:01:15.700 test-crypto-perf: explicitly disabled via build config 00:01:15.700 test-dma-perf: explicitly disabled via build config 00:01:15.700 
test-eventdev: explicitly disabled via build config 00:01:15.700 test-fib: explicitly disabled via build config 00:01:15.700 test-flow-perf: explicitly disabled via build config 00:01:15.700 test-gpudev: explicitly disabled via build config 00:01:15.700 test-mldev: explicitly disabled via build config 00:01:15.700 test-pipeline: explicitly disabled via build config 00:01:15.700 test-pmd: explicitly disabled via build config 00:01:15.700 test-regex: explicitly disabled via build config 00:01:15.700 test-sad: explicitly disabled via build config 00:01:15.700 test-security-perf: explicitly disabled via build config 00:01:15.700 00:01:15.700 libs: 00:01:15.700 argparse: explicitly disabled via build config 00:01:15.700 metrics: explicitly disabled via build config 00:01:15.700 acl: explicitly disabled via build config 00:01:15.700 bbdev: explicitly disabled via build config 00:01:15.700 bitratestats: explicitly disabled via build config 00:01:15.700 bpf: explicitly disabled via build config 00:01:15.700 cfgfile: explicitly disabled via build config 00:01:15.700 distributor: explicitly disabled via build config 00:01:15.700 efd: explicitly disabled via build config 00:01:15.700 eventdev: explicitly disabled via build config 00:01:15.700 dispatcher: explicitly disabled via build config 00:01:15.700 gpudev: explicitly disabled via build config 00:01:15.700 gro: explicitly disabled via build config 00:01:15.700 gso: explicitly disabled via build config 00:01:15.700 ip_frag: explicitly disabled via build config 00:01:15.700 jobstats: explicitly disabled via build config 00:01:15.700 latencystats: explicitly disabled via build config 00:01:15.700 lpm: explicitly disabled via build config 00:01:15.700 member: explicitly disabled via build config 00:01:15.700 pcapng: explicitly disabled via build config 00:01:15.700 rawdev: explicitly disabled via build config 00:01:15.700 regexdev: explicitly disabled via build config 00:01:15.700 mldev: explicitly disabled via build config 00:01:15.700 rib: explicitly disabled via build config 00:01:15.700 sched: explicitly disabled via build config 00:01:15.700 stack: explicitly disabled via build config 00:01:15.700 ipsec: explicitly disabled via build config 00:01:15.700 pdcp: explicitly disabled via build config 00:01:15.700 fib: explicitly disabled via build config 00:01:15.700 port: explicitly disabled via build config 00:01:15.700 pdump: explicitly disabled via build config 00:01:15.700 table: explicitly disabled via build config 00:01:15.700 pipeline: explicitly disabled via build config 00:01:15.700 graph: explicitly disabled via build config 00:01:15.700 node: explicitly disabled via build config 00:01:15.700 00:01:15.700 drivers: 00:01:15.700 common/cpt: not in enabled drivers build config 00:01:15.700 common/dpaax: not in enabled drivers build config 00:01:15.700 common/iavf: not in enabled drivers build config 00:01:15.700 common/idpf: not in enabled drivers build config 00:01:15.700 common/ionic: not in enabled drivers build config 00:01:15.700 common/mvep: not in enabled drivers build config 00:01:15.700 common/octeontx: not in enabled drivers build config 00:01:15.700 bus/auxiliary: not in enabled drivers build config 00:01:15.700 bus/cdx: not in enabled drivers build config 00:01:15.700 bus/dpaa: not in enabled drivers build config 00:01:15.700 bus/fslmc: not in enabled drivers build config 00:01:15.700 bus/ifpga: not in enabled drivers build config 00:01:15.700 bus/platform: not in enabled drivers build config 00:01:15.700 bus/uacce: not in enabled 
drivers build config 00:01:15.700 bus/vmbus: not in enabled drivers build config 00:01:15.700 common/cnxk: not in enabled drivers build config 00:01:15.700 common/mlx5: not in enabled drivers build config 00:01:15.700 common/nfp: not in enabled drivers build config 00:01:15.700 common/nitrox: not in enabled drivers build config 00:01:15.700 common/qat: not in enabled drivers build config 00:01:15.700 common/sfc_efx: not in enabled drivers build config 00:01:15.700 mempool/bucket: not in enabled drivers build config 00:01:15.700 mempool/cnxk: not in enabled drivers build config 00:01:15.700 mempool/dpaa: not in enabled drivers build config 00:01:15.700 mempool/dpaa2: not in enabled drivers build config 00:01:15.700 mempool/octeontx: not in enabled drivers build config 00:01:15.700 mempool/stack: not in enabled drivers build config 00:01:15.700 dma/cnxk: not in enabled drivers build config 00:01:15.700 dma/dpaa: not in enabled drivers build config 00:01:15.700 dma/dpaa2: not in enabled drivers build config 00:01:15.700 dma/hisilicon: not in enabled drivers build config 00:01:15.700 dma/idxd: not in enabled drivers build config 00:01:15.700 dma/ioat: not in enabled drivers build config 00:01:15.700 dma/skeleton: not in enabled drivers build config 00:01:15.700 net/af_packet: not in enabled drivers build config 00:01:15.700 net/af_xdp: not in enabled drivers build config 00:01:15.700 net/ark: not in enabled drivers build config 00:01:15.700 net/atlantic: not in enabled drivers build config 00:01:15.700 net/avp: not in enabled drivers build config 00:01:15.700 net/axgbe: not in enabled drivers build config 00:01:15.700 net/bnx2x: not in enabled drivers build config 00:01:15.700 net/bnxt: not in enabled drivers build config 00:01:15.700 net/bonding: not in enabled drivers build config 00:01:15.700 net/cnxk: not in enabled drivers build config 00:01:15.700 net/cpfl: not in enabled drivers build config 00:01:15.700 net/cxgbe: not in enabled drivers build config 00:01:15.700 net/dpaa: not in enabled drivers build config 00:01:15.700 net/dpaa2: not in enabled drivers build config 00:01:15.700 net/e1000: not in enabled drivers build config 00:01:15.700 net/ena: not in enabled drivers build config 00:01:15.700 net/enetc: not in enabled drivers build config 00:01:15.700 net/enetfec: not in enabled drivers build config 00:01:15.700 net/enic: not in enabled drivers build config 00:01:15.700 net/failsafe: not in enabled drivers build config 00:01:15.700 net/fm10k: not in enabled drivers build config 00:01:15.700 net/gve: not in enabled drivers build config 00:01:15.700 net/hinic: not in enabled drivers build config 00:01:15.700 net/hns3: not in enabled drivers build config 00:01:15.700 net/i40e: not in enabled drivers build config 00:01:15.700 net/iavf: not in enabled drivers build config 00:01:15.700 net/ice: not in enabled drivers build config 00:01:15.700 net/idpf: not in enabled drivers build config 00:01:15.700 net/igc: not in enabled drivers build config 00:01:15.700 net/ionic: not in enabled drivers build config 00:01:15.700 net/ipn3ke: not in enabled drivers build config 00:01:15.700 net/ixgbe: not in enabled drivers build config 00:01:15.700 net/mana: not in enabled drivers build config 00:01:15.700 net/memif: not in enabled drivers build config 00:01:15.700 net/mlx4: not in enabled drivers build config 00:01:15.700 net/mlx5: not in enabled drivers build config 00:01:15.700 net/mvneta: not in enabled drivers build config 00:01:15.700 net/mvpp2: not in enabled drivers build config 00:01:15.700 
net/netvsc: not in enabled drivers build config 00:01:15.700 net/nfb: not in enabled drivers build config 00:01:15.700 net/nfp: not in enabled drivers build config 00:01:15.700 net/ngbe: not in enabled drivers build config 00:01:15.700 net/null: not in enabled drivers build config 00:01:15.700 net/octeontx: not in enabled drivers build config 00:01:15.700 net/octeon_ep: not in enabled drivers build config 00:01:15.700 net/pcap: not in enabled drivers build config 00:01:15.700 net/pfe: not in enabled drivers build config 00:01:15.700 net/qede: not in enabled drivers build config 00:01:15.700 net/ring: not in enabled drivers build config 00:01:15.700 net/sfc: not in enabled drivers build config 00:01:15.700 net/softnic: not in enabled drivers build config 00:01:15.700 net/tap: not in enabled drivers build config 00:01:15.700 net/thunderx: not in enabled drivers build config 00:01:15.700 net/txgbe: not in enabled drivers build config 00:01:15.700 net/vdev_netvsc: not in enabled drivers build config 00:01:15.700 net/vhost: not in enabled drivers build config 00:01:15.700 net/virtio: not in enabled drivers build config 00:01:15.700 net/vmxnet3: not in enabled drivers build config 00:01:15.700 raw/*: missing internal dependency, "rawdev" 00:01:15.700 crypto/armv8: not in enabled drivers build config 00:01:15.700 crypto/bcmfs: not in enabled drivers build config 00:01:15.700 crypto/caam_jr: not in enabled drivers build config 00:01:15.700 crypto/ccp: not in enabled drivers build config 00:01:15.700 crypto/cnxk: not in enabled drivers build config 00:01:15.700 crypto/dpaa_sec: not in enabled drivers build config 00:01:15.700 crypto/dpaa2_sec: not in enabled drivers build config 00:01:15.700 crypto/ipsec_mb: not in enabled drivers build config 00:01:15.700 crypto/mlx5: not in enabled drivers build config 00:01:15.700 crypto/mvsam: not in enabled drivers build config 00:01:15.700 crypto/nitrox: not in enabled drivers build config 00:01:15.700 crypto/null: not in enabled drivers build config 00:01:15.701 crypto/octeontx: not in enabled drivers build config 00:01:15.701 crypto/openssl: not in enabled drivers build config 00:01:15.701 crypto/scheduler: not in enabled drivers build config 00:01:15.701 crypto/uadk: not in enabled drivers build config 00:01:15.701 crypto/virtio: not in enabled drivers build config 00:01:15.701 compress/isal: not in enabled drivers build config 00:01:15.701 compress/mlx5: not in enabled drivers build config 00:01:15.701 compress/nitrox: not in enabled drivers build config 00:01:15.701 compress/octeontx: not in enabled drivers build config 00:01:15.701 compress/zlib: not in enabled drivers build config 00:01:15.701 regex/*: missing internal dependency, "regexdev" 00:01:15.701 ml/*: missing internal dependency, "mldev" 00:01:15.701 vdpa/ifc: not in enabled drivers build config 00:01:15.701 vdpa/mlx5: not in enabled drivers build config 00:01:15.701 vdpa/nfp: not in enabled drivers build config 00:01:15.701 vdpa/sfc: not in enabled drivers build config 00:01:15.701 event/*: missing internal dependency, "eventdev" 00:01:15.701 baseband/*: missing internal dependency, "bbdev" 00:01:15.701 gpu/*: missing internal dependency, "gpudev" 00:01:15.701 00:01:15.701 00:01:15.701 Build targets in project: 85 00:01:15.701 00:01:15.701 DPDK 24.03.0 00:01:15.701 00:01:15.701 User defined options 00:01:15.701 buildtype : debug 00:01:15.701 default_library : shared 00:01:15.701 libdir : lib 00:01:15.701 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:15.701 c_args 
: -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:15.701 c_link_args : 00:01:15.701 cpu_instruction_set: native 00:01:15.701 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:15.701 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:15.701 enable_docs : false 00:01:15.701 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:15.701 enable_kmods : false 00:01:15.701 max_lcores : 128 00:01:15.701 tests : false 00:01:15.701 00:01:15.701 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:15.968 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:15.968 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:15.968 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:15.968 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:15.968 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:15.968 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:16.229 [6/268] Linking static target lib/librte_kvargs.a 00:01:16.229 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:16.229 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:16.229 [9/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:16.229 [10/268] Linking static target lib/librte_log.a 00:01:16.229 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:16.229 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:16.229 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:16.229 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:16.797 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:16.797 [16/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.797 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:16.797 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:16.797 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:16.797 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:16.797 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:16.797 [22/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:17.058 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:17.058 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:17.058 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:17.058 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:17.058 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:17.058 [28/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:17.058 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:17.058 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:17.058 [31/268] Linking static target lib/librte_telemetry.a 00:01:17.058 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:17.058 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:17.058 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:17.058 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:17.058 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:17.058 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:17.058 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:17.058 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:17.058 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:17.058 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:17.058 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:17.058 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:17.319 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:17.319 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:17.319 [46/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.319 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:17.319 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:17.319 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:17.319 [50/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:17.580 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:17.580 [52/268] Linking target lib/librte_log.so.24.1 00:01:17.580 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:17.847 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:17.847 [55/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:17.848 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:17.848 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:17.848 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:17.848 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:17.848 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:17.848 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:17.848 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:17.848 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:17.848 [64/268] Linking target lib/librte_kvargs.so.24.1 00:01:17.848 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:18.108 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:18.108 [67/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.108 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:18.108 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:18.108 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:18.108 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:18.108 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:18.108 [73/268] Linking target lib/librte_telemetry.so.24.1 00:01:18.108 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:18.108 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:18.108 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:18.108 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:18.372 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:18.372 [79/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:18.372 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:18.372 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:18.372 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:18.372 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:18.637 [84/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:18.637 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:18.637 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:18.637 [87/268] Linking static target lib/librte_eal.a 00:01:18.637 [88/268] Linking static target lib/librte_ring.a 00:01:18.637 [89/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:18.637 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:18.637 [91/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:18.637 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:18.895 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:18.895 [94/268] Linking static target lib/librte_rcu.a 00:01:18.895 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:18.895 [96/268] Linking static target lib/librte_mempool.a 00:01:18.895 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:18.895 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:18.895 [99/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:18.895 [100/268] Linking static target lib/librte_pci.a 00:01:18.895 [101/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:18.895 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:18.895 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:18.895 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:19.157 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:19.157 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:19.157 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:19.157 [108/268] Linking static target lib/librte_meter.a 00:01:19.157 [109/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:19.157 [110/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:19.157 
[111/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:19.157 [112/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:19.157 [113/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.157 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:19.157 [115/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:19.157 [116/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.157 [117/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:19.416 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:19.416 [119/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.416 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:19.416 [121/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:19.417 [122/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:19.417 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:19.417 [124/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:19.417 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:19.417 [126/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.417 [127/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:19.417 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:19.417 [129/268] Linking static target lib/librte_net.a 00:01:19.417 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:19.682 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:19.682 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:19.682 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:19.682 [134/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:19.682 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:19.682 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:19.682 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:19.939 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:19.939 [139/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:19.939 [140/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:19.939 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:19.939 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:19.939 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:20.201 [144/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:20.201 [145/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.201 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:20.201 [147/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.201 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:20.201 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 
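[Editorial note] The "Generating lib/<name>.sym_chk" entries interleaved with the compile steps above are DPDK's exported-symbol checks, run as meson custom commands precisely so their output can be captured, as the log itself notes. A hypothetical sketch of the idea only, not the literal script meson wraps; file names are illustrative:

    # Hypothetical: compare what a built library actually exports against
    # what its version map declares; write the .sym_chk stamp on success.
    nm -D --defined-only lib/librte_ring.so.24.1 | awk '{print $NF}' | sort > exported.txt
    sed -n 's/^[[:space:]]*\(rte_[A-Za-z0-9_]*\);.*/\1/p' ../lib/ring/version.map | sort > declared.txt
    diff exported.txt declared.txt && touch lib/ring.sym_chk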
00:01:20.201 [150/268] Linking static target lib/librte_cmdline.a 00:01:20.201 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:20.461 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:20.461 [153/268] Linking static target lib/librte_timer.a 00:01:20.461 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:20.461 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:20.461 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:20.461 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:20.719 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:20.719 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:20.719 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:20.719 [161/268] Linking static target lib/librte_dmadev.a 00:01:20.719 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:20.719 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:20.981 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:20.981 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:20.981 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:20.981 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:20.981 [168/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:20.981 [169/268] Linking static target lib/librte_compressdev.a 00:01:20.981 [170/268] Linking static target lib/librte_mbuf.a 00:01:20.981 [171/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:20.981 [172/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:20.981 [173/268] Linking static target lib/librte_hash.a 00:01:20.981 [174/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:20.981 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:20.981 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:20.981 [177/268] Linking static target lib/librte_power.a 00:01:21.239 [178/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.239 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:21.239 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:21.497 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:21.497 [182/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:21.497 [183/268] Linking static target lib/librte_reorder.a 00:01:21.497 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.497 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:21.497 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:21.497 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:21.753 [188/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.753 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:21.753 [190/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:21.753 [191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.753 [192/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.753 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:21.753 [194/268] Linking static target lib/librte_security.a 00:01:21.753 [195/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:21.753 [196/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:21.753 [197/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:21.753 [198/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:21.753 [199/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.753 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:21.753 [201/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:21.753 [202/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.753 [203/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.753 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:21.753 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:22.010 [206/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:22.010 [207/268] Linking static target lib/librte_ethdev.a 00:01:22.010 [208/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:22.010 [209/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:22.010 [210/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:22.010 [211/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:22.010 [212/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:22.010 [213/268] Linking static target drivers/librte_bus_vdev.a 00:01:22.010 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:22.010 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:22.010 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:22.010 [217/268] Linking static target drivers/librte_bus_pci.a 00:01:22.010 [218/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:22.010 [219/268] Linking static target lib/librte_cryptodev.a 00:01:22.010 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.267 [221/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:22.267 [222/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:22.267 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:22.267 [224/268] Linking static target drivers/librte_mempool_ring.a 00:01:22.267 [225/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.524 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.087 [227/268] Generating lib/cryptodev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:24.459 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:25.829 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.830 [230/268] Linking target lib/librte_eal.so.24.1 00:01:25.830 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.830 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:25.830 [233/268] Linking target lib/librte_ring.so.24.1 00:01:25.830 [234/268] Linking target lib/librte_meter.so.24.1 00:01:25.830 [235/268] Linking target lib/librte_pci.so.24.1 00:01:25.830 [236/268] Linking target lib/librte_dmadev.so.24.1 00:01:25.830 [237/268] Linking target lib/librte_timer.so.24.1 00:01:25.830 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:25.830 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:25.830 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:25.830 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:25.830 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:25.830 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:26.088 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:26.088 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:26.088 [246/268] Linking target lib/librte_mempool.so.24.1 00:01:26.088 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:26.088 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:26.088 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:26.088 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:26.345 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:26.346 [252/268] Linking target lib/librte_net.so.24.1 00:01:26.346 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:01:26.346 [254/268] Linking target lib/librte_reorder.so.24.1 00:01:26.346 [255/268] Linking target lib/librte_compressdev.so.24.1 00:01:26.346 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:26.346 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:26.346 [258/268] Linking target lib/librte_hash.so.24.1 00:01:26.346 [259/268] Linking target lib/librte_cmdline.so.24.1 00:01:26.346 [260/268] Linking target lib/librte_security.so.24.1 00:01:26.604 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:26.604 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:26.604 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:26.604 [264/268] Linking target lib/librte_power.so.24.1 00:01:30.789 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:30.789 [266/268] Linking static target lib/librte_vhost.a 00:01:31.726 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.726 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:31.726 INFO: autodetecting backend as ninja 00:01:31.726 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 32 
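[Editorial note] With target [268/268] linked, the bundled DPDK subproject build is complete; meson then reports the ninja backend command it drives for the build directory, and the log switches to SPDK's own objects below. A minimal sketch of the equivalent manual configure-and-build, assuming meson and ninja are installed, with option values copied from the configuration dump earlier in this log (the long disable_apps/disable_libs lists are elided here):

    # Sketch only: configure and build the bundled DPDK by hand.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
    meson setup build-tmp \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_kmods=false -Denable_docs=false \
        -Dtests=false -Dmax_lcores=128
    ninja -C build-tmp -j 32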
00:01:32.662 CC lib/ut/ut.o 00:01:32.662 CC lib/log/log.o 00:01:32.662 CC lib/ut_mock/mock.o 00:01:32.662 CC lib/log/log_flags.o 00:01:32.662 CC lib/log/log_deprecated.o 00:01:32.919 LIB libspdk_log.a 00:01:32.919 LIB libspdk_ut.a 00:01:32.919 LIB libspdk_ut_mock.a 00:01:32.919 SO libspdk_ut.so.2.0 00:01:32.919 SO libspdk_ut_mock.so.6.0 00:01:32.919 SO libspdk_log.so.7.0 00:01:32.919 SYMLINK libspdk_ut.so 00:01:32.919 SYMLINK libspdk_ut_mock.so 00:01:32.919 SYMLINK libspdk_log.so 00:01:33.188 CC lib/util/base64.o 00:01:33.188 CXX lib/trace_parser/trace.o 00:01:33.188 CC lib/ioat/ioat.o 00:01:33.188 CC lib/dma/dma.o 00:01:33.188 CC lib/util/bit_array.o 00:01:33.188 CC lib/util/cpuset.o 00:01:33.189 CC lib/util/crc16.o 00:01:33.189 CC lib/util/crc32.o 00:01:33.189 CC lib/util/crc32c.o 00:01:33.189 CC lib/util/crc32_ieee.o 00:01:33.189 CC lib/util/crc64.o 00:01:33.189 CC lib/util/dif.o 00:01:33.189 CC lib/util/fd.o 00:01:33.189 CC lib/util/fd_group.o 00:01:33.189 CC lib/util/file.o 00:01:33.189 CC lib/util/hexlify.o 00:01:33.189 CC lib/util/iov.o 00:01:33.189 CC lib/util/math.o 00:01:33.189 CC lib/util/net.o 00:01:33.189 CC lib/util/pipe.o 00:01:33.189 CC lib/util/strerror_tls.o 00:01:33.189 CC lib/util/string.o 00:01:33.189 CC lib/util/uuid.o 00:01:33.189 CC lib/util/xor.o 00:01:33.189 CC lib/util/zipf.o 00:01:33.189 CC lib/vfio_user/host/vfio_user_pci.o 00:01:33.189 CC lib/vfio_user/host/vfio_user.o 00:01:33.447 LIB libspdk_dma.a 00:01:33.447 SO libspdk_dma.so.4.0 00:01:33.447 LIB libspdk_ioat.a 00:01:33.447 SYMLINK libspdk_dma.so 00:01:33.447 SO libspdk_ioat.so.7.0 00:01:33.705 LIB libspdk_vfio_user.a 00:01:33.705 SO libspdk_vfio_user.so.5.0 00:01:33.705 SYMLINK libspdk_ioat.so 00:01:33.705 SYMLINK libspdk_vfio_user.so 00:01:33.705 LIB libspdk_util.a 00:01:33.963 SO libspdk_util.so.10.0 00:01:33.963 SYMLINK libspdk_util.so 00:01:34.221 CC lib/json/json_parse.o 00:01:34.221 CC lib/json/json_util.o 00:01:34.221 CC lib/json/json_write.o 00:01:34.221 CC lib/rdma_utils/rdma_utils.o 00:01:34.221 CC lib/conf/conf.o 00:01:34.221 CC lib/idxd/idxd.o 00:01:34.221 CC lib/idxd/idxd_user.o 00:01:34.221 CC lib/idxd/idxd_kernel.o 00:01:34.221 CC lib/env_dpdk/env.o 00:01:34.221 CC lib/env_dpdk/memory.o 00:01:34.221 CC lib/env_dpdk/pci.o 00:01:34.221 CC lib/vmd/vmd.o 00:01:34.221 CC lib/vmd/led.o 00:01:34.221 CC lib/env_dpdk/init.o 00:01:34.221 CC lib/rdma_provider/common.o 00:01:34.221 CC lib/env_dpdk/threads.o 00:01:34.221 CC lib/env_dpdk/pci_ioat.o 00:01:34.221 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:34.221 CC lib/env_dpdk/pci_virtio.o 00:01:34.221 CC lib/env_dpdk/pci_vmd.o 00:01:34.221 CC lib/env_dpdk/pci_idxd.o 00:01:34.221 CC lib/env_dpdk/sigbus_handler.o 00:01:34.221 CC lib/env_dpdk/pci_event.o 00:01:34.221 CC lib/env_dpdk/pci_dpdk.o 00:01:34.221 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:34.221 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:34.479 LIB libspdk_trace_parser.a 00:01:34.479 SO libspdk_trace_parser.so.5.0 00:01:34.479 LIB libspdk_rdma_provider.a 00:01:34.479 LIB libspdk_json.a 00:01:34.479 SO libspdk_rdma_provider.so.6.0 00:01:34.479 SO libspdk_json.so.6.0 00:01:34.479 LIB libspdk_rdma_utils.a 00:01:34.479 LIB libspdk_conf.a 00:01:34.479 SO libspdk_rdma_utils.so.1.0 00:01:34.479 SYMLINK libspdk_trace_parser.so 00:01:34.479 SYMLINK libspdk_rdma_provider.so 00:01:34.479 SO libspdk_conf.so.6.0 00:01:34.479 SYMLINK libspdk_json.so 00:01:34.737 SYMLINK libspdk_rdma_utils.so 00:01:34.737 SYMLINK libspdk_conf.so 00:01:34.737 CC lib/jsonrpc/jsonrpc_server.o 00:01:34.737 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:01:34.737 CC lib/jsonrpc/jsonrpc_client.o 00:01:34.737 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:34.737 LIB libspdk_idxd.a 00:01:34.995 SO libspdk_idxd.so.12.0 00:01:34.995 LIB libspdk_vmd.a 00:01:34.995 SYMLINK libspdk_idxd.so 00:01:34.995 SO libspdk_vmd.so.6.0 00:01:34.995 SYMLINK libspdk_vmd.so 00:01:34.995 LIB libspdk_jsonrpc.a 00:01:34.995 SO libspdk_jsonrpc.so.6.0 00:01:35.253 SYMLINK libspdk_jsonrpc.so 00:01:35.253 CC lib/rpc/rpc.o 00:01:35.511 LIB libspdk_rpc.a 00:01:35.511 SO libspdk_rpc.so.6.0 00:01:35.769 SYMLINK libspdk_rpc.so 00:01:35.769 CC lib/notify/notify.o 00:01:35.769 CC lib/notify/notify_rpc.o 00:01:35.769 CC lib/keyring/keyring.o 00:01:35.769 CC lib/keyring/keyring_rpc.o 00:01:35.769 CC lib/trace/trace.o 00:01:35.769 CC lib/trace/trace_flags.o 00:01:35.769 CC lib/trace/trace_rpc.o 00:01:36.027 LIB libspdk_notify.a 00:01:36.027 SO libspdk_notify.so.6.0 00:01:36.027 SYMLINK libspdk_notify.so 00:01:36.027 LIB libspdk_keyring.a 00:01:36.027 LIB libspdk_trace.a 00:01:36.027 SO libspdk_keyring.so.1.0 00:01:36.027 SO libspdk_trace.so.10.0 00:01:36.284 SYMLINK libspdk_keyring.so 00:01:36.284 SYMLINK libspdk_trace.so 00:01:36.284 LIB libspdk_env_dpdk.a 00:01:36.284 CC lib/sock/sock.o 00:01:36.284 CC lib/thread/thread.o 00:01:36.284 CC lib/sock/sock_rpc.o 00:01:36.284 CC lib/thread/iobuf.o 00:01:36.284 SO libspdk_env_dpdk.so.15.0 00:01:36.543 SYMLINK libspdk_env_dpdk.so 00:01:36.802 LIB libspdk_sock.a 00:01:36.802 SO libspdk_sock.so.10.0 00:01:36.802 SYMLINK libspdk_sock.so 00:01:37.088 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:37.088 CC lib/nvme/nvme_ctrlr.o 00:01:37.088 CC lib/nvme/nvme_fabric.o 00:01:37.088 CC lib/nvme/nvme_ns_cmd.o 00:01:37.088 CC lib/nvme/nvme_ns.o 00:01:37.088 CC lib/nvme/nvme_pcie_common.o 00:01:37.088 CC lib/nvme/nvme_pcie.o 00:01:37.088 CC lib/nvme/nvme_qpair.o 00:01:37.088 CC lib/nvme/nvme.o 00:01:37.088 CC lib/nvme/nvme_quirks.o 00:01:37.088 CC lib/nvme/nvme_transport.o 00:01:37.088 CC lib/nvme/nvme_discovery.o 00:01:37.088 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:37.088 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:37.088 CC lib/nvme/nvme_tcp.o 00:01:37.088 CC lib/nvme/nvme_io_msg.o 00:01:37.088 CC lib/nvme/nvme_opal.o 00:01:37.088 CC lib/nvme/nvme_poll_group.o 00:01:37.088 CC lib/nvme/nvme_zns.o 00:01:37.088 CC lib/nvme/nvme_stubs.o 00:01:37.088 CC lib/nvme/nvme_auth.o 00:01:37.088 CC lib/nvme/nvme_cuse.o 00:01:37.088 CC lib/nvme/nvme_vfio_user.o 00:01:37.088 CC lib/nvme/nvme_rdma.o 00:01:38.073 LIB libspdk_thread.a 00:01:38.331 SO libspdk_thread.so.10.1 00:01:38.331 SYMLINK libspdk_thread.so 00:01:38.331 CC lib/init/json_config.o 00:01:38.331 CC lib/accel/accel.o 00:01:38.331 CC lib/accel/accel_rpc.o 00:01:38.331 CC lib/init/subsystem.o 00:01:38.331 CC lib/virtio/virtio.o 00:01:38.331 CC lib/accel/accel_sw.o 00:01:38.331 CC lib/init/subsystem_rpc.o 00:01:38.331 CC lib/blob/blobstore.o 00:01:38.331 CC lib/virtio/virtio_vhost_user.o 00:01:38.331 CC lib/virtio/virtio_vfio_user.o 00:01:38.331 CC lib/blob/request.o 00:01:38.331 CC lib/init/rpc.o 00:01:38.331 CC lib/blob/zeroes.o 00:01:38.331 CC lib/virtio/virtio_pci.o 00:01:38.331 CC lib/blob/blob_bs_dev.o 00:01:38.331 CC lib/vfu_tgt/tgt_endpoint.o 00:01:38.331 CC lib/vfu_tgt/tgt_rpc.o 00:01:38.897 LIB libspdk_init.a 00:01:38.897 SO libspdk_init.so.5.0 00:01:38.897 LIB libspdk_vfu_tgt.a 00:01:38.897 SYMLINK libspdk_init.so 00:01:38.897 SO libspdk_vfu_tgt.so.3.0 00:01:38.897 LIB libspdk_virtio.a 00:01:38.897 SYMLINK libspdk_vfu_tgt.so 00:01:38.897 SO libspdk_virtio.so.7.0 
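[Editorial note] The CC lines above are SPDK's own objects; as each component finishes, the build prints a LIB line (static archive), an SO line (versioned shared object), and a SYMLINK line (unversioned development name). A hypothetical illustration for the lib/log component seen above, assuming a conventional soname scheme; the actual link commands are not printed by this build:

    # Assumed shape of the LIB / SO / SYMLINK steps for lib/log
    # (objects are compiled with -fPIC, so one set serves both outputs).
    ar crs libspdk_log.a log.o log_flags.o log_deprecated.o        # LIB
    cc -shared -Wl,-soname,libspdk_log.so.7 \
       -o libspdk_log.so.7.0 log.o log_flags.o log_deprecated.o   # SO
    ln -sf libspdk_log.so.7.0 libspdk_log.so                      # SYMLINK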
00:01:39.155 CC lib/event/app.o 00:01:39.155 CC lib/event/log_rpc.o 00:01:39.155 CC lib/event/reactor.o 00:01:39.155 CC lib/event/app_rpc.o 00:01:39.155 CC lib/event/scheduler_static.o 00:01:39.155 SYMLINK libspdk_virtio.so 00:01:39.412 LIB libspdk_accel.a 00:01:39.412 SO libspdk_accel.so.16.0 00:01:39.412 LIB libspdk_event.a 00:01:39.671 SO libspdk_event.so.14.0 00:01:39.671 SYMLINK libspdk_accel.so 00:01:39.671 SYMLINK libspdk_event.so 00:01:39.671 CC lib/bdev/bdev.o 00:01:39.671 CC lib/bdev/bdev_rpc.o 00:01:39.671 CC lib/bdev/bdev_zone.o 00:01:39.671 CC lib/bdev/part.o 00:01:39.671 CC lib/bdev/scsi_nvme.o 00:01:40.237 LIB libspdk_nvme.a 00:01:40.237 SO libspdk_nvme.so.13.1 00:01:40.495 SYMLINK libspdk_nvme.so 00:01:41.428 LIB libspdk_blob.a 00:01:41.428 SO libspdk_blob.so.11.0 00:01:41.686 SYMLINK libspdk_blob.so 00:01:41.686 CC lib/lvol/lvol.o 00:01:41.686 CC lib/blobfs/blobfs.o 00:01:41.686 CC lib/blobfs/tree.o 00:01:42.618 LIB libspdk_blobfs.a 00:01:42.618 SO libspdk_blobfs.so.10.0 00:01:42.618 LIB libspdk_bdev.a 00:01:42.618 SYMLINK libspdk_blobfs.so 00:01:42.618 SO libspdk_bdev.so.16.0 00:01:42.618 LIB libspdk_lvol.a 00:01:42.618 SO libspdk_lvol.so.10.0 00:01:42.618 SYMLINK libspdk_bdev.so 00:01:42.618 SYMLINK libspdk_lvol.so 00:01:42.880 CC lib/ublk/ublk.o 00:01:42.880 CC lib/nbd/nbd.o 00:01:42.880 CC lib/nvmf/ctrlr.o 00:01:42.880 CC lib/ublk/ublk_rpc.o 00:01:42.880 CC lib/nbd/nbd_rpc.o 00:01:42.880 CC lib/nvmf/ctrlr_discovery.o 00:01:42.880 CC lib/nvmf/ctrlr_bdev.o 00:01:42.880 CC lib/scsi/dev.o 00:01:42.880 CC lib/ftl/ftl_core.o 00:01:42.880 CC lib/scsi/lun.o 00:01:42.880 CC lib/nvmf/subsystem.o 00:01:42.880 CC lib/scsi/port.o 00:01:42.880 CC lib/nvmf/nvmf.o 00:01:42.880 CC lib/ftl/ftl_init.o 00:01:42.880 CC lib/ftl/ftl_layout.o 00:01:42.880 CC lib/ftl/ftl_debug.o 00:01:42.880 CC lib/nvmf/nvmf_rpc.o 00:01:42.880 CC lib/scsi/scsi.o 00:01:42.880 CC lib/scsi/scsi_bdev.o 00:01:42.880 CC lib/ftl/ftl_io.o 00:01:42.880 CC lib/scsi/scsi_pr.o 00:01:42.880 CC lib/nvmf/transport.o 00:01:42.880 CC lib/ftl/ftl_sb.o 00:01:42.880 CC lib/scsi/scsi_rpc.o 00:01:42.880 CC lib/nvmf/tcp.o 00:01:42.880 CC lib/ftl/ftl_l2p.o 00:01:42.880 CC lib/nvmf/stubs.o 00:01:42.880 CC lib/ftl/ftl_l2p_flat.o 00:01:42.880 CC lib/scsi/task.o 00:01:42.880 CC lib/nvmf/mdns_server.o 00:01:42.880 CC lib/ftl/ftl_nv_cache.o 00:01:42.880 CC lib/nvmf/vfio_user.o 00:01:43.138 CC lib/ftl/ftl_band.o 00:01:43.138 CC lib/nvmf/rdma.o 00:01:43.138 CC lib/ftl/ftl_band_ops.o 00:01:43.138 CC lib/nvmf/auth.o 00:01:43.138 CC lib/ftl/ftl_writer.o 00:01:43.398 CC lib/ftl/ftl_rq.o 00:01:43.398 CC lib/ftl/ftl_reloc.o 00:01:43.398 CC lib/ftl/ftl_l2p_cache.o 00:01:43.398 CC lib/ftl/ftl_p2l.o 00:01:43.398 CC lib/ftl/mngt/ftl_mngt.o 00:01:43.398 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:43.398 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:43.398 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:43.398 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:43.398 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:43.398 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:43.660 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:43.660 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:43.660 LIB libspdk_nbd.a 00:01:43.660 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:43.660 SO libspdk_nbd.so.7.0 00:01:43.660 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:43.660 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:43.660 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:43.660 CC lib/ftl/utils/ftl_conf.o 00:01:43.660 CC lib/ftl/utils/ftl_md.o 00:01:43.922 LIB libspdk_scsi.a 00:01:43.922 SYMLINK libspdk_nbd.so 00:01:43.922 CC lib/ftl/utils/ftl_mempool.o 00:01:43.922 
CC lib/ftl/utils/ftl_bitmap.o 00:01:43.922 CC lib/ftl/utils/ftl_property.o 00:01:43.922 SO libspdk_scsi.so.9.0 00:01:43.922 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:43.922 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:43.922 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:43.922 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:43.922 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:43.922 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:43.922 LIB libspdk_ublk.a 00:01:43.922 SYMLINK libspdk_scsi.so 00:01:43.922 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:43.922 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:44.181 SO libspdk_ublk.so.3.0 00:01:44.181 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:44.181 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:44.181 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:44.181 SYMLINK libspdk_ublk.so 00:01:44.181 CC lib/ftl/base/ftl_base_dev.o 00:01:44.181 CC lib/ftl/base/ftl_base_bdev.o 00:01:44.181 CC lib/ftl/ftl_trace.o 00:01:44.439 CC lib/iscsi/conn.o 00:01:44.439 CC lib/iscsi/init_grp.o 00:01:44.439 CC lib/iscsi/iscsi.o 00:01:44.439 CC lib/iscsi/md5.o 00:01:44.439 CC lib/iscsi/param.o 00:01:44.439 CC lib/iscsi/portal_grp.o 00:01:44.439 CC lib/iscsi/tgt_node.o 00:01:44.439 CC lib/iscsi/iscsi_subsystem.o 00:01:44.439 CC lib/iscsi/task.o 00:01:44.439 CC lib/iscsi/iscsi_rpc.o 00:01:44.439 CC lib/vhost/vhost.o 00:01:44.439 CC lib/vhost/vhost_rpc.o 00:01:44.439 CC lib/vhost/vhost_scsi.o 00:01:44.439 CC lib/vhost/vhost_blk.o 00:01:44.439 CC lib/vhost/rte_vhost_user.o 00:01:44.698 LIB libspdk_ftl.a 00:01:44.956 SO libspdk_ftl.so.9.0 00:01:45.214 SYMLINK libspdk_ftl.so 00:01:45.778 LIB libspdk_vhost.a 00:01:45.778 SO libspdk_vhost.so.8.0 00:01:45.778 SYMLINK libspdk_vhost.so 00:01:46.036 LIB libspdk_iscsi.a 00:01:46.036 SO libspdk_iscsi.so.8.0 00:01:46.036 LIB libspdk_nvmf.a 00:01:46.036 SYMLINK libspdk_iscsi.so 00:01:46.036 SO libspdk_nvmf.so.19.0 00:01:46.296 SYMLINK libspdk_nvmf.so 00:01:46.554 CC module/env_dpdk/env_dpdk_rpc.o 00:01:46.554 CC module/vfu_device/vfu_virtio.o 00:01:46.554 CC module/vfu_device/vfu_virtio_blk.o 00:01:46.554 CC module/vfu_device/vfu_virtio_scsi.o 00:01:46.554 CC module/vfu_device/vfu_virtio_rpc.o 00:01:46.813 CC module/blob/bdev/blob_bdev.o 00:01:46.813 CC module/keyring/linux/keyring.o 00:01:46.813 CC module/keyring/file/keyring.o 00:01:46.813 CC module/accel/ioat/accel_ioat.o 00:01:46.813 CC module/keyring/linux/keyring_rpc.o 00:01:46.813 CC module/accel/ioat/accel_ioat_rpc.o 00:01:46.813 CC module/accel/dsa/accel_dsa.o 00:01:46.813 CC module/keyring/file/keyring_rpc.o 00:01:46.813 CC module/accel/dsa/accel_dsa_rpc.o 00:01:46.813 CC module/scheduler/gscheduler/gscheduler.o 00:01:46.813 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:46.813 CC module/accel/iaa/accel_iaa.o 00:01:46.813 CC module/accel/iaa/accel_iaa_rpc.o 00:01:46.813 CC module/accel/error/accel_error.o 00:01:46.813 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:46.813 CC module/accel/error/accel_error_rpc.o 00:01:46.813 CC module/sock/posix/posix.o 00:01:46.813 LIB libspdk_env_dpdk_rpc.a 00:01:46.813 SO libspdk_env_dpdk_rpc.so.6.0 00:01:46.813 LIB libspdk_scheduler_gscheduler.a 00:01:46.813 LIB libspdk_keyring_linux.a 00:01:46.813 SYMLINK libspdk_env_dpdk_rpc.so 00:01:46.813 SO libspdk_scheduler_gscheduler.so.4.0 00:01:46.813 LIB libspdk_scheduler_dynamic.a 00:01:46.813 SO libspdk_keyring_linux.so.1.0 00:01:47.072 LIB libspdk_keyring_file.a 00:01:47.072 SO libspdk_scheduler_dynamic.so.4.0 00:01:47.072 SYMLINK libspdk_scheduler_gscheduler.so 00:01:47.072 LIB libspdk_scheduler_dpdk_governor.a 
00:01:47.072 SO libspdk_keyring_file.so.1.0 00:01:47.072 LIB libspdk_accel_error.a 00:01:47.072 SYMLINK libspdk_keyring_linux.so 00:01:47.072 LIB libspdk_blob_bdev.a 00:01:47.072 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:47.072 SYMLINK libspdk_scheduler_dynamic.so 00:01:47.072 SO libspdk_accel_error.so.2.0 00:01:47.072 SO libspdk_blob_bdev.so.11.0 00:01:47.072 LIB libspdk_accel_iaa.a 00:01:47.072 SYMLINK libspdk_keyring_file.so 00:01:47.072 LIB libspdk_accel_ioat.a 00:01:47.072 SO libspdk_accel_iaa.so.3.0 00:01:47.072 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:47.072 SO libspdk_accel_ioat.so.6.0 00:01:47.072 SYMLINK libspdk_blob_bdev.so 00:01:47.072 SYMLINK libspdk_accel_error.so 00:01:47.072 SYMLINK libspdk_accel_iaa.so 00:01:47.072 LIB libspdk_accel_dsa.a 00:01:47.072 SYMLINK libspdk_accel_ioat.so 00:01:47.072 SO libspdk_accel_dsa.so.5.0 00:01:47.340 SYMLINK libspdk_accel_dsa.so 00:01:47.340 CC module/bdev/error/vbdev_error.o 00:01:47.340 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:47.340 CC module/bdev/error/vbdev_error_rpc.o 00:01:47.340 CC module/bdev/nvme/bdev_nvme.o 00:01:47.340 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:47.340 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:47.340 CC module/bdev/nvme/bdev_mdns_client.o 00:01:47.340 CC module/bdev/nvme/nvme_rpc.o 00:01:47.340 CC module/bdev/nvme/vbdev_opal.o 00:01:47.340 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:47.340 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:47.340 CC module/bdev/aio/bdev_aio.o 00:01:47.340 CC module/bdev/passthru/vbdev_passthru.o 00:01:47.340 CC module/bdev/null/bdev_null.o 00:01:47.340 CC module/bdev/aio/bdev_aio_rpc.o 00:01:47.340 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:47.340 CC module/bdev/delay/vbdev_delay.o 00:01:47.340 CC module/bdev/null/bdev_null_rpc.o 00:01:47.340 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:47.340 CC module/blobfs/bdev/blobfs_bdev.o 00:01:47.340 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:47.340 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:47.340 CC module/bdev/iscsi/bdev_iscsi.o 00:01:47.340 CC module/bdev/ftl/bdev_ftl.o 00:01:47.340 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:47.340 CC module/bdev/malloc/bdev_malloc.o 00:01:47.340 CC module/bdev/raid/bdev_raid.o 00:01:47.340 CC module/bdev/gpt/gpt.o 00:01:47.340 CC module/bdev/lvol/vbdev_lvol.o 00:01:47.340 CC module/bdev/split/vbdev_split.o 00:01:47.340 LIB libspdk_vfu_device.a 00:01:47.599 SO libspdk_vfu_device.so.3.0 00:01:47.599 SYMLINK libspdk_vfu_device.so 00:01:47.599 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:47.599 CC module/bdev/gpt/vbdev_gpt.o 00:01:47.599 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:47.599 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:47.599 CC module/bdev/raid/bdev_raid_rpc.o 00:01:47.599 CC module/bdev/raid/bdev_raid_sb.o 00:01:47.599 CC module/bdev/raid/raid0.o 00:01:47.857 CC module/bdev/raid/raid1.o 00:01:47.857 CC module/bdev/raid/concat.o 00:01:47.857 CC module/bdev/split/vbdev_split_rpc.o 00:01:47.857 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:47.857 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:47.857 LIB libspdk_blobfs_bdev.a 00:01:47.857 SO libspdk_blobfs_bdev.so.6.0 00:01:47.857 LIB libspdk_sock_posix.a 00:01:47.857 LIB libspdk_bdev_null.a 00:01:47.857 SO libspdk_sock_posix.so.6.0 00:01:47.857 SYMLINK libspdk_blobfs_bdev.so 00:01:47.857 SO libspdk_bdev_null.so.6.0 00:01:47.857 LIB libspdk_bdev_error.a 00:01:47.857 LIB libspdk_bdev_aio.a 00:01:47.857 SO libspdk_bdev_error.so.6.0 00:01:47.857 SYMLINK libspdk_sock_posix.so 00:01:47.857 LIB 
libspdk_bdev_passthru.a 00:01:48.115 SO libspdk_bdev_aio.so.6.0 00:01:48.115 SO libspdk_bdev_passthru.so.6.0 00:01:48.115 SYMLINK libspdk_bdev_null.so 00:01:48.115 LIB libspdk_bdev_zone_block.a 00:01:48.115 SYMLINK libspdk_bdev_error.so 00:01:48.115 LIB libspdk_bdev_iscsi.a 00:01:48.115 LIB libspdk_bdev_ftl.a 00:01:48.115 LIB libspdk_bdev_delay.a 00:01:48.115 SO libspdk_bdev_zone_block.so.6.0 00:01:48.115 SO libspdk_bdev_ftl.so.6.0 00:01:48.115 SO libspdk_bdev_iscsi.so.6.0 00:01:48.115 LIB libspdk_bdev_split.a 00:01:48.115 SYMLINK libspdk_bdev_aio.so 00:01:48.115 SYMLINK libspdk_bdev_passthru.so 00:01:48.115 SO libspdk_bdev_delay.so.6.0 00:01:48.115 SO libspdk_bdev_split.so.6.0 00:01:48.115 LIB libspdk_bdev_malloc.a 00:01:48.115 SYMLINK libspdk_bdev_iscsi.so 00:01:48.115 SO libspdk_bdev_malloc.so.6.0 00:01:48.115 SYMLINK libspdk_bdev_ftl.so 00:01:48.115 SYMLINK libspdk_bdev_zone_block.so 00:01:48.115 SYMLINK libspdk_bdev_delay.so 00:01:48.115 LIB libspdk_bdev_gpt.a 00:01:48.115 SYMLINK libspdk_bdev_split.so 00:01:48.115 SO libspdk_bdev_gpt.so.6.0 00:01:48.115 SYMLINK libspdk_bdev_malloc.so 00:01:48.115 LIB libspdk_bdev_virtio.a 00:01:48.373 SYMLINK libspdk_bdev_gpt.so 00:01:48.373 SO libspdk_bdev_virtio.so.6.0 00:01:48.373 LIB libspdk_bdev_lvol.a 00:01:48.373 SYMLINK libspdk_bdev_virtio.so 00:01:48.373 SO libspdk_bdev_lvol.so.6.0 00:01:48.373 SYMLINK libspdk_bdev_lvol.so 00:01:48.631 LIB libspdk_bdev_raid.a 00:01:48.631 SO libspdk_bdev_raid.so.6.0 00:01:48.889 SYMLINK libspdk_bdev_raid.so 00:01:50.264 LIB libspdk_bdev_nvme.a 00:01:50.264 SO libspdk_bdev_nvme.so.7.0 00:01:50.264 SYMLINK libspdk_bdev_nvme.so 00:01:50.830 CC module/event/subsystems/iobuf/iobuf.o 00:01:50.830 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:50.830 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:50.830 CC module/event/subsystems/scheduler/scheduler.o 00:01:50.830 CC module/event/subsystems/keyring/keyring.o 00:01:50.830 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:50.830 CC module/event/subsystems/vmd/vmd.o 00:01:50.830 CC module/event/subsystems/sock/sock.o 00:01:50.830 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:50.830 LIB libspdk_event_keyring.a 00:01:50.830 LIB libspdk_event_vhost_blk.a 00:01:50.830 LIB libspdk_event_scheduler.a 00:01:50.830 LIB libspdk_event_vfu_tgt.a 00:01:50.830 LIB libspdk_event_sock.a 00:01:50.830 LIB libspdk_event_vmd.a 00:01:50.830 SO libspdk_event_keyring.so.1.0 00:01:50.830 LIB libspdk_event_iobuf.a 00:01:50.830 SO libspdk_event_vhost_blk.so.3.0 00:01:50.830 SO libspdk_event_sock.so.5.0 00:01:50.830 SO libspdk_event_scheduler.so.4.0 00:01:50.830 SO libspdk_event_vfu_tgt.so.3.0 00:01:50.830 SO libspdk_event_vmd.so.6.0 00:01:50.830 SO libspdk_event_iobuf.so.3.0 00:01:50.830 SYMLINK libspdk_event_keyring.so 00:01:50.830 SYMLINK libspdk_event_sock.so 00:01:50.830 SYMLINK libspdk_event_scheduler.so 00:01:50.830 SYMLINK libspdk_event_vhost_blk.so 00:01:50.830 SYMLINK libspdk_event_vfu_tgt.so 00:01:50.830 SYMLINK libspdk_event_vmd.so 00:01:51.088 SYMLINK libspdk_event_iobuf.so 00:01:51.088 CC module/event/subsystems/accel/accel.o 00:01:51.347 LIB libspdk_event_accel.a 00:01:51.347 SO libspdk_event_accel.so.6.0 00:01:51.347 SYMLINK libspdk_event_accel.so 00:01:51.605 CC module/event/subsystems/bdev/bdev.o 00:01:51.863 LIB libspdk_event_bdev.a 00:01:51.863 SO libspdk_event_bdev.so.6.0 00:01:51.863 SYMLINK libspdk_event_bdev.so 00:01:52.121 CC module/event/subsystems/nbd/nbd.o 00:01:52.121 CC module/event/subsystems/ublk/ublk.o 00:01:52.121 CC 
module/event/subsystems/scsi/scsi.o 00:01:52.121 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:52.121 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:52.121 LIB libspdk_event_nbd.a 00:01:52.121 LIB libspdk_event_ublk.a 00:01:52.121 LIB libspdk_event_scsi.a 00:01:52.121 SO libspdk_event_ublk.so.3.0 00:01:52.121 SO libspdk_event_nbd.so.6.0 00:01:52.121 SO libspdk_event_scsi.so.6.0 00:01:52.379 SYMLINK libspdk_event_nbd.so 00:01:52.379 SYMLINK libspdk_event_ublk.so 00:01:52.379 SYMLINK libspdk_event_scsi.so 00:01:52.379 LIB libspdk_event_nvmf.a 00:01:52.379 SO libspdk_event_nvmf.so.6.0 00:01:52.379 SYMLINK libspdk_event_nvmf.so 00:01:52.379 CC module/event/subsystems/iscsi/iscsi.o 00:01:52.379 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:52.638 LIB libspdk_event_vhost_scsi.a 00:01:52.638 LIB libspdk_event_iscsi.a 00:01:52.638 SO libspdk_event_vhost_scsi.so.3.0 00:01:52.638 SO libspdk_event_iscsi.so.6.0 00:01:52.638 SYMLINK libspdk_event_vhost_scsi.so 00:01:52.638 SYMLINK libspdk_event_iscsi.so 00:01:52.897 SO libspdk.so.6.0 00:01:52.897 SYMLINK libspdk.so 00:01:53.164 TEST_HEADER include/spdk/accel.h 00:01:53.164 TEST_HEADER include/spdk/accel_module.h 00:01:53.164 TEST_HEADER include/spdk/assert.h 00:01:53.164 CC app/spdk_nvme_identify/identify.o 00:01:53.164 TEST_HEADER include/spdk/barrier.h 00:01:53.164 TEST_HEADER include/spdk/base64.h 00:01:53.164 TEST_HEADER include/spdk/bdev.h 00:01:53.164 CC app/trace_record/trace_record.o 00:01:53.164 TEST_HEADER include/spdk/bdev_module.h 00:01:53.164 TEST_HEADER include/spdk/bdev_zone.h 00:01:53.164 TEST_HEADER include/spdk/bit_array.h 00:01:53.164 TEST_HEADER include/spdk/bit_pool.h 00:01:53.164 TEST_HEADER include/spdk/blob_bdev.h 00:01:53.164 CXX app/trace/trace.o 00:01:53.164 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:53.164 TEST_HEADER include/spdk/blobfs.h 00:01:53.164 TEST_HEADER include/spdk/blob.h 00:01:53.164 TEST_HEADER include/spdk/conf.h 00:01:53.164 TEST_HEADER include/spdk/config.h 00:01:53.164 CC app/spdk_nvme_discover/discovery_aer.o 00:01:53.164 TEST_HEADER include/spdk/cpuset.h 00:01:53.164 CC app/spdk_top/spdk_top.o 00:01:53.164 CC test/rpc_client/rpc_client_test.o 00:01:53.164 TEST_HEADER include/spdk/crc16.h 00:01:53.164 CC app/spdk_nvme_perf/perf.o 00:01:53.164 TEST_HEADER include/spdk/crc32.h 00:01:53.164 CC app/spdk_lspci/spdk_lspci.o 00:01:53.164 TEST_HEADER include/spdk/crc64.h 00:01:53.164 TEST_HEADER include/spdk/dif.h 00:01:53.164 TEST_HEADER include/spdk/dma.h 00:01:53.164 TEST_HEADER include/spdk/endian.h 00:01:53.164 TEST_HEADER include/spdk/env_dpdk.h 00:01:53.164 TEST_HEADER include/spdk/env.h 00:01:53.164 TEST_HEADER include/spdk/event.h 00:01:53.164 TEST_HEADER include/spdk/fd_group.h 00:01:53.164 TEST_HEADER include/spdk/fd.h 00:01:53.164 TEST_HEADER include/spdk/file.h 00:01:53.164 TEST_HEADER include/spdk/ftl.h 00:01:53.164 TEST_HEADER include/spdk/gpt_spec.h 00:01:53.164 TEST_HEADER include/spdk/hexlify.h 00:01:53.164 TEST_HEADER include/spdk/histogram_data.h 00:01:53.164 TEST_HEADER include/spdk/idxd.h 00:01:53.164 TEST_HEADER include/spdk/init.h 00:01:53.164 TEST_HEADER include/spdk/idxd_spec.h 00:01:53.164 TEST_HEADER include/spdk/ioat.h 00:01:53.164 TEST_HEADER include/spdk/ioat_spec.h 00:01:53.164 TEST_HEADER include/spdk/iscsi_spec.h 00:01:53.164 TEST_HEADER include/spdk/json.h 00:01:53.164 TEST_HEADER include/spdk/jsonrpc.h 00:01:53.164 TEST_HEADER include/spdk/keyring.h 00:01:53.164 TEST_HEADER include/spdk/likely.h 00:01:53.164 TEST_HEADER include/spdk/keyring_module.h 
00:01:53.164 TEST_HEADER include/spdk/log.h 00:01:53.164 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:53.164 TEST_HEADER include/spdk/lvol.h 00:01:53.164 TEST_HEADER include/spdk/memory.h 00:01:53.164 TEST_HEADER include/spdk/mmio.h 00:01:53.164 TEST_HEADER include/spdk/nbd.h 00:01:53.164 TEST_HEADER include/spdk/net.h 00:01:53.164 TEST_HEADER include/spdk/notify.h 00:01:53.164 TEST_HEADER include/spdk/nvme.h 00:01:53.164 TEST_HEADER include/spdk/nvme_intel.h 00:01:53.164 CC app/spdk_dd/spdk_dd.o 00:01:53.164 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:53.164 CC app/iscsi_tgt/iscsi_tgt.o 00:01:53.164 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:53.164 TEST_HEADER include/spdk/nvme_spec.h 00:01:53.164 TEST_HEADER include/spdk/nvme_zns.h 00:01:53.164 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:53.164 TEST_HEADER include/spdk/nvmf.h 00:01:53.164 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:53.164 TEST_HEADER include/spdk/nvmf_spec.h 00:01:53.164 TEST_HEADER include/spdk/nvmf_transport.h 00:01:53.164 CC app/nvmf_tgt/nvmf_main.o 00:01:53.164 TEST_HEADER include/spdk/opal.h 00:01:53.164 TEST_HEADER include/spdk/opal_spec.h 00:01:53.164 TEST_HEADER include/spdk/pci_ids.h 00:01:53.164 TEST_HEADER include/spdk/pipe.h 00:01:53.164 CC examples/util/zipf/zipf.o 00:01:53.164 TEST_HEADER include/spdk/reduce.h 00:01:53.164 TEST_HEADER include/spdk/queue.h 00:01:53.164 TEST_HEADER include/spdk/rpc.h 00:01:53.164 TEST_HEADER include/spdk/scheduler.h 00:01:53.164 TEST_HEADER include/spdk/scsi.h 00:01:53.164 CC test/app/stub/stub.o 00:01:53.164 TEST_HEADER include/spdk/scsi_spec.h 00:01:53.164 TEST_HEADER include/spdk/sock.h 00:01:53.164 TEST_HEADER include/spdk/stdinc.h 00:01:53.164 CC test/app/histogram_perf/histogram_perf.o 00:01:53.164 CC test/app/jsoncat/jsoncat.o 00:01:53.164 TEST_HEADER include/spdk/string.h 00:01:53.164 CC examples/ioat/verify/verify.o 00:01:53.164 TEST_HEADER include/spdk/thread.h 00:01:53.164 CC test/env/vtophys/vtophys.o 00:01:53.164 CC test/env/memory/memory_ut.o 00:01:53.164 CC test/thread/poller_perf/poller_perf.o 00:01:53.164 TEST_HEADER include/spdk/trace.h 00:01:53.164 CC app/spdk_tgt/spdk_tgt.o 00:01:53.164 TEST_HEADER include/spdk/trace_parser.h 00:01:53.164 CC examples/ioat/perf/perf.o 00:01:53.164 TEST_HEADER include/spdk/tree.h 00:01:53.164 TEST_HEADER include/spdk/ublk.h 00:01:53.164 CC app/fio/nvme/fio_plugin.o 00:01:53.164 TEST_HEADER include/spdk/util.h 00:01:53.164 TEST_HEADER include/spdk/uuid.h 00:01:53.164 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:53.164 CC test/env/pci/pci_ut.o 00:01:53.164 TEST_HEADER include/spdk/version.h 00:01:53.164 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:53.164 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:53.164 TEST_HEADER include/spdk/vhost.h 00:01:53.164 TEST_HEADER include/spdk/vmd.h 00:01:53.426 TEST_HEADER include/spdk/xor.h 00:01:53.426 TEST_HEADER include/spdk/zipf.h 00:01:53.426 CXX test/cpp_headers/accel.o 00:01:53.426 CC test/dma/test_dma/test_dma.o 00:01:53.426 CC app/fio/bdev/fio_plugin.o 00:01:53.426 CC test/app/bdev_svc/bdev_svc.o 00:01:53.426 LINK spdk_lspci 00:01:53.426 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:53.426 CC test/env/mem_callbacks/mem_callbacks.o 00:01:53.426 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:53.426 LINK rpc_client_test 00:01:53.426 LINK jsoncat 00:01:53.426 LINK interrupt_tgt 00:01:53.426 LINK spdk_nvme_discover 00:01:53.426 LINK zipf 00:01:53.426 LINK histogram_perf 00:01:53.686 LINK vtophys 00:01:53.686 LINK nvmf_tgt 00:01:53.686 LINK iscsi_tgt 00:01:53.686 
LINK poller_perf 00:01:53.686 LINK spdk_trace_record 00:01:53.686 LINK stub 00:01:53.686 LINK env_dpdk_post_init 00:01:53.686 LINK verify 00:01:53.686 CXX test/cpp_headers/accel_module.o 00:01:53.686 LINK bdev_svc 00:01:53.686 LINK ioat_perf 00:01:53.686 LINK spdk_tgt 00:01:53.686 CXX test/cpp_headers/assert.o 00:01:53.946 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:53.946 CXX test/cpp_headers/barrier.o 00:01:53.946 CXX test/cpp_headers/base64.o 00:01:53.946 CXX test/cpp_headers/bdev.o 00:01:53.946 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:53.946 LINK spdk_dd 00:01:53.946 CXX test/cpp_headers/bdev_module.o 00:01:53.946 CXX test/cpp_headers/bdev_zone.o 00:01:53.946 CXX test/cpp_headers/bit_array.o 00:01:53.946 LINK spdk_trace 00:01:53.946 CXX test/cpp_headers/bit_pool.o 00:01:53.946 CXX test/cpp_headers/blob_bdev.o 00:01:53.946 CXX test/cpp_headers/blobfs_bdev.o 00:01:53.946 LINK test_dma 00:01:54.209 LINK pci_ut 00:01:54.209 CXX test/cpp_headers/blobfs.o 00:01:54.209 CC test/event/event_perf/event_perf.o 00:01:54.209 LINK nvme_fuzz 00:01:54.209 CXX test/cpp_headers/blob.o 00:01:54.209 CC examples/sock/hello_world/hello_sock.o 00:01:54.209 CC test/event/reactor/reactor.o 00:01:54.209 CXX test/cpp_headers/conf.o 00:01:54.209 LINK spdk_nvme 00:01:54.209 CC test/event/reactor_perf/reactor_perf.o 00:01:54.209 CC examples/thread/thread/thread_ex.o 00:01:54.209 CXX test/cpp_headers/config.o 00:01:54.209 CXX test/cpp_headers/cpuset.o 00:01:54.209 CC examples/vmd/lsvmd/lsvmd.o 00:01:54.209 LINK spdk_bdev 00:01:54.209 CC examples/idxd/perf/perf.o 00:01:54.209 CXX test/cpp_headers/crc16.o 00:01:54.209 CC test/event/app_repeat/app_repeat.o 00:01:54.209 CXX test/cpp_headers/crc32.o 00:01:54.467 CC examples/vmd/led/led.o 00:01:54.467 CXX test/cpp_headers/crc64.o 00:01:54.467 CXX test/cpp_headers/dif.o 00:01:54.467 CC test/event/scheduler/scheduler.o 00:01:54.467 LINK event_perf 00:01:54.467 CXX test/cpp_headers/dma.o 00:01:54.467 CXX test/cpp_headers/endian.o 00:01:54.467 LINK reactor 00:01:54.467 CC app/vhost/vhost.o 00:01:54.467 CXX test/cpp_headers/env_dpdk.o 00:01:54.467 CXX test/cpp_headers/env.o 00:01:54.467 LINK reactor_perf 00:01:54.467 LINK lsvmd 00:01:54.467 LINK spdk_nvme_identify 00:01:54.731 CXX test/cpp_headers/event.o 00:01:54.731 LINK app_repeat 00:01:54.731 LINK led 00:01:54.731 LINK hello_sock 00:01:54.731 LINK mem_callbacks 00:01:54.731 LINK vhost_fuzz 00:01:54.731 LINK spdk_nvme_perf 00:01:54.731 LINK thread 00:01:54.731 CC test/nvme/aer/aer.o 00:01:54.731 CXX test/cpp_headers/fd_group.o 00:01:54.731 CXX test/cpp_headers/fd.o 00:01:54.731 CC test/nvme/reset/reset.o 00:01:54.731 CXX test/cpp_headers/file.o 00:01:54.731 CC test/blobfs/mkfs/mkfs.o 00:01:54.731 CC test/accel/dif/dif.o 00:01:54.992 LINK scheduler 00:01:54.992 CC test/nvme/sgl/sgl.o 00:01:54.992 CC test/nvme/e2edp/nvme_dp.o 00:01:54.992 CC test/nvme/overhead/overhead.o 00:01:54.992 LINK spdk_top 00:01:54.992 CC test/lvol/esnap/esnap.o 00:01:54.992 LINK vhost 00:01:54.992 CXX test/cpp_headers/ftl.o 00:01:54.992 CC test/nvme/err_injection/err_injection.o 00:01:54.992 LINK idxd_perf 00:01:54.992 CXX test/cpp_headers/gpt_spec.o 00:01:54.992 CC test/nvme/startup/startup.o 00:01:54.992 CC test/nvme/reserve/reserve.o 00:01:54.992 CXX test/cpp_headers/hexlify.o 00:01:54.992 CXX test/cpp_headers/histogram_data.o 00:01:54.992 CC test/nvme/simple_copy/simple_copy.o 00:01:54.992 CC test/nvme/connect_stress/connect_stress.o 00:01:54.992 CC test/nvme/boot_partition/boot_partition.o 00:01:55.256 CC 
test/nvme/compliance/nvme_compliance.o 00:01:55.256 CXX test/cpp_headers/idxd.o 00:01:55.256 CC test/nvme/fused_ordering/fused_ordering.o 00:01:55.256 CXX test/cpp_headers/idxd_spec.o 00:01:55.256 CXX test/cpp_headers/init.o 00:01:55.256 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:55.256 CXX test/cpp_headers/ioat.o 00:01:55.256 LINK mkfs 00:01:55.256 LINK aer 00:01:55.256 CC examples/nvme/hello_world/hello_world.o 00:01:55.256 LINK reset 00:01:55.256 LINK err_injection 00:01:55.256 CC test/nvme/fdp/fdp.o 00:01:55.256 CC test/nvme/cuse/cuse.o 00:01:55.256 LINK startup 00:01:55.524 LINK nvme_dp 00:01:55.525 LINK sgl 00:01:55.525 LINK connect_stress 00:01:55.525 LINK boot_partition 00:01:55.525 LINK reserve 00:01:55.525 LINK overhead 00:01:55.525 CC examples/nvme/reconnect/reconnect.o 00:01:55.525 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:55.525 CC examples/accel/perf/accel_perf.o 00:01:55.525 CC examples/nvme/arbitration/arbitration.o 00:01:55.525 CC examples/nvme/hotplug/hotplug.o 00:01:55.525 CXX test/cpp_headers/ioat_spec.o 00:01:55.525 LINK simple_copy 00:01:55.525 CXX test/cpp_headers/iscsi_spec.o 00:01:55.525 LINK fused_ordering 00:01:55.525 CXX test/cpp_headers/json.o 00:01:55.525 LINK doorbell_aers 00:01:55.525 LINK memory_ut 00:01:55.789 CXX test/cpp_headers/jsonrpc.o 00:01:55.789 LINK dif 00:01:55.789 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:55.789 CXX test/cpp_headers/keyring.o 00:01:55.789 CXX test/cpp_headers/keyring_module.o 00:01:55.789 CC examples/nvme/abort/abort.o 00:01:55.789 CXX test/cpp_headers/likely.o 00:01:55.789 CC examples/blob/hello_world/hello_blob.o 00:01:55.789 CXX test/cpp_headers/log.o 00:01:55.789 CXX test/cpp_headers/lvol.o 00:01:55.789 CXX test/cpp_headers/memory.o 00:01:55.789 LINK hello_world 00:01:55.789 CXX test/cpp_headers/mmio.o 00:01:55.789 LINK nvme_compliance 00:01:55.789 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:55.789 CC examples/blob/cli/blobcli.o 00:01:55.789 CXX test/cpp_headers/nbd.o 00:01:55.789 CXX test/cpp_headers/net.o 00:01:56.051 CXX test/cpp_headers/notify.o 00:01:56.051 CXX test/cpp_headers/nvme.o 00:01:56.051 CXX test/cpp_headers/nvme_intel.o 00:01:56.051 LINK fdp 00:01:56.051 CXX test/cpp_headers/nvme_ocssd.o 00:01:56.051 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:56.051 CXX test/cpp_headers/nvme_spec.o 00:01:56.051 LINK hotplug 00:01:56.051 CXX test/cpp_headers/nvme_zns.o 00:01:56.051 CXX test/cpp_headers/nvmf_cmd.o 00:01:56.051 LINK cmb_copy 00:01:56.051 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:56.051 CXX test/cpp_headers/nvmf.o 00:01:56.051 CXX test/cpp_headers/nvmf_spec.o 00:01:56.051 CXX test/cpp_headers/nvmf_transport.o 00:01:56.051 CXX test/cpp_headers/opal.o 00:01:56.051 CXX test/cpp_headers/opal_spec.o 00:01:56.051 CXX test/cpp_headers/pci_ids.o 00:01:56.051 LINK reconnect 00:01:56.051 LINK arbitration 00:01:56.051 CXX test/cpp_headers/pipe.o 00:01:56.313 LINK pmr_persistence 00:01:56.313 CXX test/cpp_headers/queue.o 00:01:56.313 CXX test/cpp_headers/reduce.o 00:01:56.313 LINK hello_blob 00:01:56.313 CXX test/cpp_headers/rpc.o 00:01:56.313 CXX test/cpp_headers/scheduler.o 00:01:56.313 CXX test/cpp_headers/scsi.o 00:01:56.313 CXX test/cpp_headers/scsi_spec.o 00:01:56.313 CXX test/cpp_headers/sock.o 00:01:56.313 LINK nvme_manage 00:01:56.313 CXX test/cpp_headers/stdinc.o 00:01:56.313 CXX test/cpp_headers/string.o 00:01:56.313 CXX test/cpp_headers/thread.o 00:01:56.572 CXX test/cpp_headers/trace.o 00:01:56.572 CXX test/cpp_headers/trace_parser.o 00:01:56.572 LINK accel_perf 00:01:56.572 CXX 
test/cpp_headers/tree.o 00:01:56.572 CC test/bdev/bdevio/bdevio.o 00:01:56.572 CXX test/cpp_headers/ublk.o 00:01:56.572 CXX test/cpp_headers/util.o 00:01:56.572 CXX test/cpp_headers/uuid.o 00:01:56.572 CXX test/cpp_headers/version.o 00:01:56.572 CXX test/cpp_headers/vfio_user_pci.o 00:01:56.572 LINK abort 00:01:56.572 CXX test/cpp_headers/vfio_user_spec.o 00:01:56.572 CXX test/cpp_headers/vhost.o 00:01:56.572 CXX test/cpp_headers/vmd.o 00:01:56.573 CXX test/cpp_headers/xor.o 00:01:56.573 CXX test/cpp_headers/zipf.o 00:01:56.831 LINK blobcli 00:01:56.831 LINK iscsi_fuzz 00:01:57.089 CC examples/bdev/hello_world/hello_bdev.o 00:01:57.089 CC examples/bdev/bdevperf/bdevperf.o 00:01:57.089 LINK bdevio 00:01:57.346 LINK hello_bdev 00:01:57.604 LINK cuse 00:01:57.604 LINK bdevperf 00:01:58.167 CC examples/nvmf/nvmf/nvmf.o 00:01:58.424 LINK nvmf 00:02:00.390 LINK esnap 00:02:00.959 00:02:00.959 real 0m56.104s 00:02:00.959 user 11m2.677s 00:02:00.959 sys 2m21.124s 00:02:00.959 18:59:06 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:00.959 18:59:06 make -- common/autotest_common.sh@10 -- $ set +x 00:02:00.959 ************************************ 00:02:00.959 END TEST make 00:02:00.959 ************************************ 00:02:00.959 18:59:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:00.959 18:59:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:00.959 18:59:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:00.959 18:59:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.959 18:59:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:00.959 18:59:06 -- pm/common@44 -- $ pid=2386982 00:02:00.959 18:59:06 -- pm/common@50 -- $ kill -TERM 2386982 00:02:00.959 18:59:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.959 18:59:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:00.959 18:59:06 -- pm/common@44 -- $ pid=2386984 00:02:00.959 18:59:06 -- pm/common@50 -- $ kill -TERM 2386984 00:02:00.959 18:59:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.959 18:59:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:00.959 18:59:06 -- pm/common@44 -- $ pid=2386986 00:02:00.959 18:59:06 -- pm/common@50 -- $ kill -TERM 2386986 00:02:00.959 18:59:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.959 18:59:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:00.959 18:59:06 -- pm/common@44 -- $ pid=2387014 00:02:00.959 18:59:06 -- pm/common@50 -- $ sudo -E kill -TERM 2387014 00:02:00.959 18:59:06 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:00.959 18:59:06 -- nvmf/common.sh@7 -- # uname -s 00:02:00.959 18:59:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:00.959 18:59:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:00.959 18:59:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:00.959 18:59:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:00.959 18:59:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:00.959 18:59:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:00.959 18:59:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:00.959 18:59:06 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:00.959 18:59:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:00.959 18:59:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:00.959 18:59:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:02:00.959 18:59:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:02:00.959 18:59:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:00.959 18:59:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:00.959 18:59:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:00.959 18:59:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:00.959 18:59:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:00.959 18:59:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:00.959 18:59:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:00.959 18:59:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:00.959 18:59:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.959 18:59:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.959 18:59:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.959 18:59:06 -- paths/export.sh@5 -- # export PATH 00:02:00.959 18:59:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.959 18:59:06 -- nvmf/common.sh@47 -- # : 0 00:02:00.959 18:59:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:00.959 18:59:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:00.959 18:59:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:00.959 18:59:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:00.959 18:59:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:00.959 18:59:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:00.959 18:59:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:00.959 18:59:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:00.959 18:59:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:00.959 18:59:06 -- spdk/autotest.sh@32 -- # uname -s 00:02:00.959 18:59:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:00.959 18:59:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:00.959 18:59:06 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:00.959 18:59:06 
-- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:00.959 18:59:06 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:00.959 18:59:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:00.959 18:59:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:00.959 18:59:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:00.959 18:59:06 -- spdk/autotest.sh@48 -- # udevadm_pid=2441284 00:02:00.959 18:59:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:00.959 18:59:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:00.959 18:59:06 -- pm/common@17 -- # local monitor 00:02:00.959 18:59:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.959 18:59:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.959 18:59:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.959 18:59:06 -- pm/common@21 -- # date +%s 00:02:00.959 18:59:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.959 18:59:06 -- pm/common@21 -- # date +%s 00:02:00.959 18:59:06 -- pm/common@25 -- # sleep 1 00:02:00.959 18:59:06 -- pm/common@21 -- # date +%s 00:02:00.959 18:59:06 -- pm/common@21 -- # date +%s 00:02:00.960 18:59:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721840346 00:02:00.960 18:59:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721840346 00:02:00.960 18:59:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721840346 00:02:00.960 18:59:06 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721840346 00:02:00.960 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721840346_collect-vmstat.pm.log 00:02:00.960 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721840346_collect-cpu-load.pm.log 00:02:00.960 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721840346_collect-cpu-temp.pm.log 00:02:00.960 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721840346_collect-bmc-pm.bmc.pm.log 00:02:01.896 18:59:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:01.896 18:59:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:01.896 18:59:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:01.896 18:59:07 -- common/autotest_common.sh@10 -- # set +x 00:02:01.896 18:59:07 -- spdk/autotest.sh@59 -- # create_test_list 00:02:01.896 18:59:07 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:01.896 18:59:07 -- common/autotest_common.sh@10 -- # set +x 00:02:01.896 18:59:07 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:01.896 18:59:07 -- spdk/autotest.sh@61 
-- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:02.155 18:59:07 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:02.155 18:59:07 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:02.155 18:59:07 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:02.155 18:59:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:02.155 18:59:07 -- common/autotest_common.sh@1455 -- # uname 00:02:02.155 18:59:07 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:02.155 18:59:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:02.155 18:59:07 -- common/autotest_common.sh@1475 -- # uname 00:02:02.155 18:59:07 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:02.155 18:59:07 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:02.155 18:59:07 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:02.155 18:59:07 -- spdk/autotest.sh@72 -- # hash lcov 00:02:02.155 18:59:07 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:02.155 18:59:07 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:02.155 --rc lcov_branch_coverage=1 00:02:02.155 --rc lcov_function_coverage=1 00:02:02.155 --rc genhtml_branch_coverage=1 00:02:02.155 --rc genhtml_function_coverage=1 00:02:02.155 --rc genhtml_legend=1 00:02:02.155 --rc geninfo_all_blocks=1 00:02:02.155 ' 00:02:02.155 18:59:07 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:02.155 --rc lcov_branch_coverage=1 00:02:02.155 --rc lcov_function_coverage=1 00:02:02.155 --rc genhtml_branch_coverage=1 00:02:02.155 --rc genhtml_function_coverage=1 00:02:02.155 --rc genhtml_legend=1 00:02:02.155 --rc geninfo_all_blocks=1 00:02:02.155 ' 00:02:02.155 18:59:07 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:02.155 --rc lcov_branch_coverage=1 00:02:02.155 --rc lcov_function_coverage=1 00:02:02.155 --rc genhtml_branch_coverage=1 00:02:02.155 --rc genhtml_function_coverage=1 00:02:02.155 --rc genhtml_legend=1 00:02:02.155 --rc geninfo_all_blocks=1 00:02:02.155 --no-external' 00:02:02.155 18:59:07 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:02.155 --rc lcov_branch_coverage=1 00:02:02.155 --rc lcov_function_coverage=1 00:02:02.155 --rc genhtml_branch_coverage=1 00:02:02.155 --rc genhtml_function_coverage=1 00:02:02.155 --rc genhtml_legend=1 00:02:02.155 --rc geninfo_all_blocks=1 00:02:02.155 --no-external' 00:02:02.155 18:59:07 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:02.155 lcov: LCOV version 1.14 00:02:02.155 18:59:08 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:20.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:20.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:32.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:32.424 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:32.424 [the identical two-line warning, '<header>.gcno:no functions found' followed by 'geninfo: WARNING: GCOV did not produce any data for <header>.gcno', repeats here for every remaining stub under test/cpp_headers, accel_module.gcno through version.gcno] 00:02:32.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions
found 00:02:32.425 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:32.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:32.425 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:32.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:32.425 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:32.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:32.425 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:32.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:32.425 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:32.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:32.425 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:36.606 18:59:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:36.606 18:59:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:36.606 18:59:42 -- common/autotest_common.sh@10 -- # set +x 00:02:36.606 18:59:42 -- spdk/autotest.sh@91 -- # rm -f 00:02:36.606 18:59:42 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:37.552 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:02:37.552 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:02:37.552 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:02:37.811 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:02:37.811 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:02:37.811 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:02:37.811 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:02:37.811 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:02:37.811 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:02:37.811 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:02:37.811 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:02:37.811 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:02:37.811 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:02:37.811 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:02:37.811 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:02:37.811 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:02:37.811 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:02:37.811 18:59:43 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:37.811 18:59:43 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:37.811 18:59:43 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:37.811 18:59:43 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:37.811 18:59:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:37.811 18:59:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:37.811 18:59:43 -- 
common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:37.811 18:59:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:37.811 18:59:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:37.811 18:59:43 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:37.811 18:59:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:37.811 18:59:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:37.811 18:59:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:37.811 18:59:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:37.811 18:59:43 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:38.071 No valid GPT data, bailing 00:02:38.071 18:59:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:38.071 18:59:43 -- scripts/common.sh@391 -- # pt= 00:02:38.071 18:59:43 -- scripts/common.sh@392 -- # return 1 00:02:38.071 18:59:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:38.071 1+0 records in 00:02:38.071 1+0 records out 00:02:38.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00218859 s, 479 MB/s 00:02:38.071 18:59:43 -- spdk/autotest.sh@118 -- # sync 00:02:38.071 18:59:43 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:38.071 18:59:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:38.071 18:59:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:39.977 18:59:45 -- spdk/autotest.sh@124 -- # uname -s 00:02:39.977 18:59:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:39.977 18:59:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:39.977 18:59:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:39.977 18:59:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:39.977 18:59:45 -- common/autotest_common.sh@10 -- # set +x 00:02:39.977 ************************************ 00:02:39.977 START TEST setup.sh 00:02:39.977 ************************************ 00:02:39.977 18:59:45 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:39.977 * Looking for test storage... 00:02:39.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:39.977 18:59:45 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:39.977 18:59:45 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:39.977 18:59:45 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:39.977 18:59:45 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:39.977 18:59:45 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:39.977 18:59:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:39.977 ************************************ 00:02:39.977 START TEST acl 00:02:39.977 ************************************ 00:02:39.977 18:59:45 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:39.977 * Looking for test storage... 
00:02:39.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:39.977 18:59:45 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:39.977 18:59:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:39.977 18:59:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:39.977 18:59:45 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:39.977 18:59:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:39.977 18:59:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:39.977 18:59:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:39.977 18:59:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:39.977 18:59:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:39.977 18:59:45 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:39.977 18:59:45 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:39.977 18:59:45 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:39.977 18:59:45 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:39.977 18:59:45 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:39.977 18:59:45 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:39.977 18:59:45 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:40.914 18:59:46 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:40.914 18:59:46 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:40.914 18:59:46 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:40.914 18:59:46 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.914 18:59:46 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:40.914 18:59:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.858 Hugepages 00:02:41.858 node hugesize free / total 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.858 00:02:41.858 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:41.858 [the identical four-step xtrace sequence, '[[ <bdf> == *:*:*.* ]]', '[[ ioatdma == nvme ]]', '# continue', and the next '# read -r _ dev _ _ _ driver _', repeats at 18:59:47 for each of the remaining ioatdma channels up through 0000:80:04.3; the trace resumes below at 0000:80:04.4] 00:02:41.858 18:59:47
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.858 18:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.859 18:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.859 18:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:41.859 18:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.859 18:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.859 18:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.859 18:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:41.859 18:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.859 18:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.859 18:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.859 18:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:41.859 18:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.859 18:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.859 18:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:42.118 18:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:84:00.0 == *:*:*.* ]] 00:02:42.118 18:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:42.118 18:59:47 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:02:42.118 18:59:47 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:42.118 18:59:47 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:42.118 18:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:42.118 18:59:47 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:42.118 18:59:47 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:42.118 18:59:47 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:42.118 18:59:47 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:42.118 18:59:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:42.118 ************************************ 00:02:42.118 START TEST denied 00:02:42.118 ************************************ 00:02:42.118 18:59:47 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:02:42.118 18:59:47 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:84:00.0' 00:02:42.118 18:59:47 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:42.118 18:59:47 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.118 18:59:47 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:42.118 18:59:47 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:84:00.0' 00:02:43.496 0000:84:00.0 (8086 0a54): Skipping denied controller at 0000:84:00.0 00:02:43.496 18:59:49 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:84:00.0 00:02:43.496 18:59:49 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:43.496 18:59:49 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:43.497 18:59:49 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:84:00.0 ]] 00:02:43.497 18:59:49 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:84:00.0/driver 00:02:43.497 18:59:49 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:43.497 18:59:49 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:43.497 18:59:49 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:43.497 18:59:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:43.497 18:59:49 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:46.035 00:02:46.035 real 0m3.449s 00:02:46.035 user 0m1.040s 00:02:46.035 sys 0m1.622s 00:02:46.035 18:59:51 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:46.035 18:59:51 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:46.035 ************************************ 00:02:46.035 END TEST denied 00:02:46.035 ************************************ 00:02:46.035 18:59:51 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:46.035 18:59:51 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:46.035 18:59:51 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:46.035 18:59:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:46.035 ************************************ 00:02:46.035 START TEST allowed 00:02:46.035 ************************************ 00:02:46.035 18:59:51 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:02:46.035 18:59:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:84:00.0 00:02:46.035 18:59:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:46.035 18:59:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.035 18:59:51 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:84:00.0 .*: nvme -> .*' 00:02:46.035 18:59:51 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:47.945 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:02:47.945 18:59:53 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:47.945 18:59:53 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:47.945 18:59:53 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:47.945 18:59:53 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:47.945 18:59:53 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:49.331 00:02:49.331 real 0m3.431s 00:02:49.331 user 0m0.937s 00:02:49.331 sys 0m1.505s 00:02:49.331 18:59:54 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:49.331 18:59:54 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:49.331 ************************************ 00:02:49.331 END TEST allowed 00:02:49.331 ************************************ 00:02:49.331 00:02:49.331 real 0m9.298s 00:02:49.331 user 0m2.987s 00:02:49.331 sys 0m4.655s 00:02:49.331 18:59:54 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:49.331 18:59:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:49.331 ************************************ 00:02:49.331 END TEST acl 00:02:49.331 ************************************ 00:02:49.331 18:59:54 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:49.331 18:59:54 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:49.331 18:59:54 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:02:49.331 18:59:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:49.331 ************************************ 00:02:49.331 START TEST hugepages 00:02:49.331 ************************************ 00:02:49.331 18:59:54 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:49.331 * Looking for test storage... 00:02:49.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31146348 kB' 'MemAvailable: 35111100 kB' 'Buffers: 2704 kB' 'Cached: 14622180 kB' 'SwapCached: 0 kB' 'Active: 11461464 kB' 'Inactive: 3701476 kB' 'Active(anon): 10996676 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541308 kB' 'Mapped: 168888 kB' 'Shmem: 10458620 kB' 'KReclaimable: 410152 kB' 'Slab: 704952 kB' 'SReclaimable: 410152 kB' 'SUnreclaim: 294800 kB' 'KernelStack: 9984 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32437040 kB' 'Committed_AS: 12000912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190288 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB' 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:49.331 [the identical pair of xtrace lines, a '[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]' test followed by '# continue', repeats at 18:59:55 for each subsequent /proc/meminfo field: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted] 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:49.331 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:49.332 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:49.333 18:59:55 
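The scan traced above is setup/common.sh's get_meminfo helper: it reads /proc/meminfo a line at a time with IFS=': ', skips every key that is not the requested one, and echoes the value of the first match (here 2048 for Hugepagesize). A minimal stand-alone sketch of that pattern, with our own function name rather than the SPDK source:

  #!/usr/bin/env bash
  # meminfo_value KEY -- print the value column for KEY in /proc/meminfo.
  # Mirrors the IFS=': ' read / compare / continue loop in the trace above.
  meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # non-matching keys fall through
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1                               # key not present
  }
  meminfo_value Hugepagesize   # prints 2048 on the host traced here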
00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:49.333 18:59:55 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:49.333 18:59:55 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:49.333 18:59:55 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:49.333 18:59:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:49.333 ************************************
00:02:49.333 START TEST default_setup
00:02:49.333 ************************************
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 [arithmetic sketched after the device rebind list below]
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:02:49.333 18:59:55 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:50.338 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci
00:02:50.338 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci
00:02:50.338 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci
00:02:50.338 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci
00:02:50.338 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci
00:02:50.338 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci
00:02:50.338 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci
00:02:50.338 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci
00:02:50.338 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci
00:02:50.338 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci
00:02:50.338 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci
00:02:50.338 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci
00:02:50.338 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci
00:02:50.338 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci
00:02:50.338 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci
00:02:50.338 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci
00:02:51.283 0000:84:00.0 (8086 0a54): nvme -> vfio-pci
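The get_test_nr_hugepages trace above (setup/hugepages.sh@49-@73) turns a 2097152 kB request into nr_hugepages=1024. The arithmetic is just the requested pool size divided by the default hugepage size, both in kB; a sketch under that reading, with illustrative variable names:

  # 2097152 kB requested / 2048 kB per hugepage = 1024 pages
  size_kb=2097152      # requested pool: 2 GiB expressed in kB
  hugepage_kb=2048     # default hugepage size, from Hugepagesize in /proc/meminfo
  echo $(( size_kb / hugepage_kb ))   # -> 1024, matching nr_hugepages=1024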
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.283 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33269536 kB' 'MemAvailable: 37234280 kB' 'Buffers: 2704 kB' 'Cached: 14622264 kB' 'SwapCached: 0 kB' 'Active: 11480504 kB' 'Inactive: 3701476 kB' 'Active(anon): 11015716 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559696 kB' 'Mapped: 168912 kB' 'Shmem: 10458704 kB' 'KReclaimable: 410144 kB' 'Slab: 704884 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294740 kB' 'KernelStack: 10016 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12020564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190560 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
00:02:51.283-00:02:51.284 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # [scan: every key from MemTotal through HardwareCorrupted fails the AnonHugePages comparison -> continue; repeated trace elided]
00:02:51.284 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:51.284 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:51.284 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:51.284 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
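get_meminfo is node-aware: called with an empty node argument, the [[ -e /sys/devices/system/node/node/meminfo ]] test above fails and it falls back to /proc/meminfo; given a node number it would read the per-node meminfo file and strip the leading "Node N " column with the extglob substitution visible in the @29 trace line. A sketch of that variant under the same assumptions (function name is ours):

  shopt -s extglob   # required for the +([0-9]) pattern below
  # node_meminfo_value KEY [NODE] -- like the earlier sketch, but prefer
  # the per-NUMA-node meminfo file when NODE is given and it exists.
  node_meminfo_value() {
      local get=$1 node=${2:-} var val _ mem_f=/proc/meminfo mem
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  node_meminfo_value HugePages_Surp 0   # surplus pages on node 0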
00:02:51.284 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:51.284 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.284 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.284 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.284 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.284 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.284 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.284 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33269768 kB' 'MemAvailable: 37234512 kB' 'Buffers: 2704 kB' 'Cached: 14622268 kB' 'SwapCached: 0 kB' 'Active: 11480284 kB' 'Inactive: 3701476 kB' 'Active(anon): 11015496 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559464 kB' 'Mapped: 168912 kB' 'Shmem: 10458708 kB' 'KReclaimable: 410144 kB' 'Slab: 704884 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294740 kB' 'KernelStack: 10032 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12018224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190400 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.285 18:59:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- [xtrace condensed: the remaining /proc/meminfo keys -- NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd -- are each tested against HugePages_Surp and skipped with 'continue']
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
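(The wall of [[ key == pattern ]]/continue pairs above is common.sh's get_meminfo helper running under xtrace: it scans /proc/meminfo one key at a time until the requested key matches, then echoes that key's value. A minimal sketch of the loop, reconstructed from what xtrace shows -- an illustration, not the verbatim SPDK source:

    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo,
    # or from the per-NUMA-node meminfo file when NODE is given
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        while IFS=': ' read -r var val _; do
            # the unit suffix ("kB"), if any, lands in $_ and is discarded
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

So get_meminfo HugePages_Surp prints 0 here, and hugepages.sh stores it as surp=0.)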
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:51.286 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33270032 kB' 'MemAvailable: 37234776 kB' 'Buffers: 2704 kB' 'Cached: 14622284 kB' 'SwapCached: 0 kB' 'Active: 11479000 kB' 'Inactive: 3701476 kB' 'Active(anon): 11014212 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558588 kB' 'Mapped: 168880 kB' 'Shmem: 10458724 kB' 'KReclaimable: 410144 kB' 'Slab: 704692 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294548 kB' 'KernelStack: 9744 kB' 'PageTables: 7308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12018244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190352 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
00:02:51.287 18:59:57 setup.sh.hugepages.default_setup -- [xtrace condensed: every key from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped with 'continue']
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:51.288 nr_hugepages=1024
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:51.288 resv_hugepages=0
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:51.288 surplus_hugepages=0
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:51.288 anon_hugepages=0
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
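(Before trusting the pool, hugepages.sh cross-checks the kernel's accounting: the configured page count must equal HugePages_Total once surplus and reserved pages are folded in. A sketch of that verification using the get_meminfo helper reconstructed earlier -- illustrative, not the verbatim SPDK code:

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    # consistent only if the kernel accounted for exactly the pages
    # requested, plus any surplus and reserved pages
    if (( total != nr_hugepages + surp + resv )); then
        echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
        exit 1
    fi

The @110 call that follows re-reads HugePages_Total from /proc/meminfo to perform exactly this comparison.)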
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.288 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.289 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.289 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.289 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.289 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.289 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:51.289 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33270032 kB' 'MemAvailable: 37234776 kB' 'Buffers: 2704 kB' 'Cached: 14622312 kB' 'SwapCached: 0 kB' 'Active: 11479268 kB' 'Inactive: 3701476 kB' 'Active(anon): 11014480 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558920 kB' 'Mapped: 168876 kB' 'Shmem: 10458752 kB' 'KReclaimable: 410144 kB' 'Slab: 704700 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294556 kB' 'KernelStack: 10032 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12018268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190352 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
00:02:51.289 18:59:57 setup.sh.hugepages.default_setup -- [xtrace condensed: every key from MemTotal through Unaccepted is tested against HugePages_Total and skipped with 'continue']
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
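(Cross-checking the snapshot above: Hugepagesize is 2048 kB and HugePages_Total is 1024, so the pool holds 1024 x 2048 kB = 2097152 kB = 2 GiB, exactly the Hugetlb figure reported; HugePages_Free is still 1024 because nothing has mapped the pool yet.)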
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
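(get_nodes enumerates the NUMA nodes by globbing sysfs and records how many hugepages each node owns. The trace only shows the resulting assignments -- node0 holds all 1024 pages, node1 holds 0 -- so the per-node sysfs hugepages path below is an assumption used for illustration, not taken from the log:

    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # per-node count of 2048 kB hugepages as exposed by sysfs (assumed path)
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this machine

The loop that follows then re-reads each node's meminfo to confirm the kernel placed the pages where the test expects them.)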
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.290 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:51.291 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 19555800 kB' 'MemUsed: 13325948 kB' 'SwapCached: 0 kB' 'Active: 6832116 kB' 'Inactive: 3397596 kB' 'Active(anon): 6620708 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9919380 kB' 'Mapped: 89480 kB' 'AnonPages: 313440 kB' 'Shmem: 6310376 kB' 'KernelStack: 5672 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 270388 kB' 'Slab: 419992 kB' 'SReclaimable: 270388 kB' 'SUnreclaim: 149604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:51.291 18:59:57 setup.sh.hugepages.default_setup -- [xtrace condensed: node0's keys MemTotal through HugePages_Total are each tested against HugePages_Surp and skipped with 'continue'; the scan is still in progress at this point] 00:02:51.551 18:59:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:51.551 node0=1024 expecting 1024 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:51.551 00:02:51.551 real 0m2.205s 00:02:51.551 user 0m0.619s 00:02:51.551 sys 0m0.750s 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:51.551 18:59:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:51.551 ************************************ 00:02:51.551 END TEST default_setup 00:02:51.551 ************************************ 00:02:51.551 18:59:57 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:51.551 18:59:57 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:51.551 18:59:57 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:51.551 18:59:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:51.551 ************************************ 00:02:51.551 START TEST per_node_1G_alloc 00:02:51.551 ************************************ 00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
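For context on the trace that follows: get_test_nr_hugepages is asked for 1048576 kB (1 GiB) of hugepages on each of nodes 0 and 1, and with the 2048 kB Hugepagesize reported in the meminfo snapshots below that works out to 512 pages per node, 1024 in total. A minimal sketch of that arithmetic (only the numbers come from the log; the standalone form is assumed):

size=1048576              # requested kB per node: 1 GiB
default_hugepages=2048    # Hugepagesize in kB, per the meminfo snapshots
nr_hugepages=$(( size / default_hugepages ))    # 1048576 / 2048 = 512
nodes_test=()
for node in 0 1; do       # HUGENODE=0,1
    nodes_test[node]=$nr_hugepages              # 512 pages on each node
done
echo "NRHUGE=$nr_hugepages per node, $(( nr_hugepages * ${#nodes_test[@]} )) expected total"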
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:51.551 18:59:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:52.497 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:52.497 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:52.497 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:52.497 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:52.497 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:52.497 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:52.497 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:52.497 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:52.497 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:52.497 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:52.497 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:52.497 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:52.497 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:52.497 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:52.497 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:52.497 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:52.497 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:52.497 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
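scripts/setup.sh is what actually reserves the pages: with NRHUGE=512 and HUGENODE=0,1 exported as traced above, it pins 512 pages on each node (the vfio-pci lines only confirm the test devices were already bound on an earlier pass). The kernel interface underneath is the standard per-node sysfs knob; a minimal sketch of that path, not SPDK's script itself:

NRHUGE=512
for node in 0 1; do
    # per-node 2 MiB hugepage pool; this is the stock hugetlb sysfs interface
    echo "$NRHUGE" | sudo tee \
        "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
done
grep HugePages_Total /proc/meminfo    # expect 1024 once both nodes are set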
00:02:52.497 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:52.497 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:52.497 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:52.498 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33258072 kB' 'MemAvailable: 37222816 kB' 'Buffers: 2704 kB' 'Cached: 14622392 kB' 'SwapCached: 0 kB' 'Active: 11479836 kB' 'Inactive: 3701476 kB' 'Active(anon): 11015048 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559368 kB' 'Mapped: 168784 kB' 'Shmem: 10458832 kB' 'KReclaimable: 410144 kB' 'Slab: 704708 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294564 kB' 'KernelStack: 9968 kB' 'PageTables: 7428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12018616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190352 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
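Every get_meminfo call dumps the snapshot once and then walks it a field at a time, which is why the trace repeats IFS=': ' / read / continue for each key. Functionally it reduces to a lookup like the sketch below (simplified: the real helper in setup/common.sh mapfiles the file and, for per-node files, strips the "Node N " prefix first so the same parse works):

get_meminfo() {    # sketch of the traced lookup, not the verbatim helper
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && echo "$val" && return 0
    done < /proc/meminfo
    return 1
}
get_meminfo AnonHugePages    # prints 0, matching the "echo 0" traced below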
00:02:52.498 [xtrace condensed: setup/common.sh@31-32 scan the snapshot field by field (MemTotal through HardwareCorrupted), hitting "continue" until the requested key comes up]
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
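The anon=0 just read back is gated: AnonHugePages is only sampled because transparent hugepages are in madvise mode rather than disabled, which is what the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test established. Roughly, as a sketch (the gate's purpose here is inferred from the trace order):

thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)    # "always [madvise] never" on this box
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)    # helper sketched earlier; 0 kB in this run
fi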
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:52.499 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33258268 kB' 'MemAvailable: 37223012 kB' 'Buffers: 2704 kB' 'Cached: 14622392 kB' 'SwapCached: 0 kB' 'Active: 11479496 kB' 'Inactive: 3701476 kB' 'Active(anon): 11014708 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559092 kB' 'Mapped: 169000 kB' 'Shmem: 10458832 kB' 'KReclaimable: 410144 kB' 'Slab: 704716 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294572 kB' 'KernelStack: 9968 kB' 'PageTables: 7316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12018636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190336 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
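One detail worth flagging in the get_meminfo preamble above: mem=("${mem[@]#Node +([0-9]) }") is an extglob prefix-strip that removes the "Node N " lead-in carried by per-node meminfo files, so global and per-node snapshots parse identically. A standalone demo with made-up sample lines:

shopt -s extglob                    # +([0-9]) requires extglob
mapfile -t mem <<'EOF'
Node 0 MemFree: 16000000 kB
Node 0 HugePages_Total: 512
EOF
mem=("${mem[@]#Node +([0-9]) }")    # drops the "Node 0 " prefix
printf '%s\n' "${mem[@]}"           # MemFree: 16000000 kB / HugePages_Total: 512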
00:02:52.500 [xtrace condensed: setup/common.sh@31-32 scan the snapshot field by field (MemTotal through HugePages_Free), hitting "continue" until HugePages_Surp matches]
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
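surp=0 is the value the test wants: HugePages_Surp counts surplus pages allocated beyond nr_hugepages through overcommit, and HugePages_Rsvd, queried next, counts pages promised to mappings but not yet faulted in; both should stay 0 for a clean static reservation. The surplus is folded into the per-node expectation ((( nodes_test[node] += 0 )) as traced at hugepages.sh@117 earlier), and the totals are then compared with array keys used as a set. A sketch of that dedupe idiom with illustrative values:

declare -A sorted_t=()
nodes_test=( [0]=512 [1]=512 )
for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1    # identical totals collapse onto one key
done
echo "${!sorted_t[@]}"                 # -> 512, a single agreed-on value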
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33257512 kB' 'MemAvailable: 37222256 kB' 'Buffers: 2704 kB' 'Cached: 14622412 kB' 'SwapCached: 0 kB' 'Active: 11479448 kB' 'Inactive: 3701476 kB' 'Active(anon): 11014660 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559004 kB' 'Mapped: 168892 kB' 'Shmem: 10458852 kB' 'KReclaimable: 410144 kB' 'Slab: 704740 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294596 kB' 'KernelStack: 10032 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12018660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190336 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:52.501 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: identical "[[ <field> == HugePages_Rsvd ]] / continue" iterations for every field from MemFree through HugePages_Free elided]
00:02:52.503 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:52.503 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.503 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:52.503 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:52.503 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:52.503 nr_hugepages=1024
00:02:52.503 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:52.503 resv_hugepages=0
00:02:52.503 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:52.503 surplus_hugepages=0
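With surp, resv and nr_hugepages in hand, the test next verifies the kernel's pool accounting: HugePages_Total must equal the pages requested plus any surplus and reserved pages. A sketch of that arithmetic with this run's values (the variable names are the trace's own; expressing the checks as bare (( )) tests that fail the run under set -e is an assumption about the script's error handling):

    nr_hugepages=1024                     # requested pool: 1024 x 2048 kB pages
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0
    (( $(get_meminfo HugePages_Total) == nr_hugepages ))                 # no surplus/reserved drift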
00:02:52.503 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:52.503 anon_hugepages=0
00:02:52.503 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33260884 kB' 'MemAvailable: 37225628 kB' 'Buffers: 2704 kB' 'Cached: 14622456 kB' 'SwapCached: 0 kB' 'Active: 11479100 kB' 'Inactive: 3701476 kB' 'Active(anon): 11014312 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558584 kB' 'Mapped: 168892 kB' 'Shmem: 10458896 kB' 'KReclaimable: 410144 kB' 'Slab: 704740 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294596 kB' 'KernelStack: 10016 kB' 'PageTables: 7708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12018680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190336 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:52.504 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: identical "[[ <field> == HugePages_Total ]] / continue" iterations for every field from MemFree through Unaccepted elided]
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
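get_nodes above enumerates the NUMA nodes under /sys/devices/system/node and records the per-node share of the pool (512 pages on each of this machine's two nodes); the loop traced next then re-runs get_meminfo once per node, so the helper reads /sys/devices/system/node/node<N>/meminfo instead of /proc/meminfo. A compact sketch of that bookkeeping (the trace only shows 512 already expanded; reading it from each node's nr_hugepages counter, and seeding nodes_test from nodes_sys, are assumptions):

    shopt -s extglob
    nodes_sys=() nodes_test=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips the path down to the bare node id (0, 1, ...)
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    nodes_test=("${nodes_sys[@]}")   # assumed: expected per-node counts seeded earlier
    no_nodes=${#nodes_sys[@]}        # 2 on this box
    (( no_nodes > 0 ))

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                 # fold reserved pages into the expectation
        surp=$(get_meminfo HugePages_Surp "$node")     # per-node surplus readback, node0 first
    done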
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:52.505 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:52.506 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:52.506 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:02:52.506 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:52.506 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.506 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.506 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:52.506 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:52.506 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.506 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.506 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.506 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:52.506 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 20595068 kB' 'MemUsed: 12286680 kB' 'SwapCached: 0 kB' 'Active: 6832384 kB' 'Inactive: 3397596 kB' 'Active(anon): 6620976 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9919392 kB' 'Mapped: 89496 kB' 'AnonPages: 313704 kB' 'Shmem: 6310388 kB' 'KernelStack: 5704 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 270388 kB' 'Slab: 420008 kB' 'SReclaimable: 270388 kB' 'SUnreclaim: 149620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:52.506 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.506 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: identical "[[ <field> == HugePages_Surp ]] / continue" iterations for every node0 field from MemFree through Unaccepted elided]
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:52.507 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 12665312 kB' 'MemUsed: 6744120 kB' 'SwapCached: 0 kB' 'Active: 4647160 kB' 'Inactive: 303880 kB' 'Active(anon): 4393780 kB' 'Inactive(anon): 0 kB' 'Active(file): 253380 kB' 'Inactive(file): 303880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4705792 kB' 'Mapped: 79396 kB' 'AnonPages: 245308 kB' 'Shmem: 4148532 kB' 'KernelStack: 4328 kB' 'PageTables: 3260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139756 kB' 'Slab: 284732 kB' 'SReclaimable: 139756 kB' 'SUnreclaim: 144976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the read loop steps past each node1 meminfo field with `continue` until it reaches HugePages_Surp]
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:52.509 node0=512 expecting 512
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:52.509 node1=512 expecting 512
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:52.509
00:02:52.509 real	0m1.124s
00:02:52.509 user	0m0.509s
00:02:52.509 sys	0m0.645s
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:02:52.509 18:59:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:52.509 ************************************
00:02:52.509 END TEST per_node_1G_alloc
00:02:52.509 ************************************
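For readers following the trace above: the long field-by-field runs all come from one small helper. Below is a minimal, standalone bash sketch of that lookup pattern, reconstructed from the xtrace; it is not the verbatim setup/common.sh source, and details may differ.

    #!/usr/bin/env bash
    # Reconstructed sketch of the get_meminfo pattern seen in the trace.
    shopt -s extglob   # needed for the +([0-9]) prefix strip below

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem
        # Per-node counters live in sysfs and carry a "Node <n> " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 1 " prefix
        # This scan is what produces the long runs of `continue` above.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 1   # prints 0 on the machine traced above

The `IFS=': '` split is what turns a line like 'HugePages_Surp: 0' into var=HugePages_Surp and val=0, matching the `echo 0` result recorded in the log.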
00:02:52.770 18:59:58 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:02:52.770 18:59:58 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:52.770 18:59:58 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:52.770 18:59:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:52.770 ************************************
00:02:52.770 START TEST even_2G_alloc
00:02:52.770 ************************************
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:52.770 18:59:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:53.722 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:53.722 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:53.722 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:53.722 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:53.722 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:53.722 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:53.722 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:53.722 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:53.722 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:53.722 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:53.722 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:53.722 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:53.722 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:53.722 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:53.722 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:53.722 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:53.722 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:53.722 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:02:53.722 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:53.722 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:53.722 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:53.722 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:53.722 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:53.722 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:53.722 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:53.722 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:53.722 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:53.722 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:53.723 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:53.723 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:53.723 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:53.723 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:53.723 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:53.723 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:53.723 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:53.723 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:53.723 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:53.723 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33262504 kB' 'MemAvailable: 37227248 kB' 'Buffers: 2704 kB' 'Cached: 14622524 kB' 'SwapCached: 0 kB' 'Active: 11480484 kB' 'Inactive: 3701476 kB' 'Active(anon): 11015696 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559916 kB' 'Mapped: 168916 kB' 'Shmem: 10458964 kB' 'KReclaimable: 410144 kB' 'Slab: 704880 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294736 kB' 'KernelStack: 10016 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12018752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190336 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
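The `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` line a few entries up is verify_nr_hugepages checking the transparent-hugepage mode before deciding whether AnonHugePages matters. A hedged sketch of that gate, assuming the get_meminfo sketch shown earlier (the mode string "always [madvise] never" is taken from this trace):

    # The bracketed token marks the active THP mode; "[never]" would mean
    # THP is fully disabled and anonymous hugepages can be ignored.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # helper sketched above
    else
        anon=0
    fi

On this host the mode is "always [madvise] never", so the lookup runs and returns 0, matching the `anon=0` recorded below.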
[xtrace condensed: the read loop steps past each /proc/meminfo field with `continue` until it reaches AnonHugePages]
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
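The HugePages_Surp lookup entered here feeds the same per-node accounting that produced the `node0=512 expecting 512` lines in the previous test. A compact sketch of that accounting; variable names follow the trace, while the seed values, `expected`, and the merged loop are hypothetical simplifications (the real script accumulates and reports in separate loops):

    declare -a nodes_test=(512 512)   # hypothetical per-node page counts
    declare -A sorted_t=()
    resv=0 expected=512               # hypothetical for this sketch
    for node in "${!nodes_test[@]}"; do
        surp=$(get_meminfo HugePages_Surp "$node")   # sketched earlier
        (( nodes_test[node] += resv + surp ))
        sorted_t[${nodes_test[node]}]=1              # collect distinct totals
        echo "node$node=${nodes_test[node]} expecting $expected"
    done
    # A single key in sorted_t means every node holds the same page count.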
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:53.724 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33263968 kB' 'MemAvailable: 37228712 kB' 'Buffers: 2704 kB' 'Cached: 14622528 kB' 'SwapCached: 0 kB' 'Active: 11479980 kB' 'Inactive: 3701476 kB' 'Active(anon): 11015192 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559424 kB' 'Mapped: 168976 kB' 'Shmem: 10458968 kB' 'KReclaimable: 410144 kB' 'Slab: 704860 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294716 kB' 'KernelStack: 10048 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12018768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190288 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
[xtrace condensed: the read loop again steps past each /proc/meminfo field with `continue`, scanning toward HugePages_Surp; the trace continues in this pattern]
setup/common.sh@31 -- # IFS=': ' 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.725 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.726 
18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc 
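The trace above is setup/common.sh's get_meminfo helper resolving HugePages_Surp: it reads /proc/meminfo (or a per-node meminfo file when a node is given), strips any "Node N " prefix, and scans "key: value" pairs until the requested field matches. A minimal standalone sketch of that loop, reconstructed from the @17-@33 statements in this log (approximated from the trace, not copied from the SPDK source):

#!/usr/bin/env bash
# Sketch of the get_meminfo loop traced above: pick /proc/meminfo or a
# per-node sysfs meminfo, strip the "Node N " prefix, then scan each
# "key: value" pair until the requested field is found.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val line
    local mem_f mem

    mem_f=/proc/meminfo
    # Per-node counters live under sysfs, as in the node0 lookup later on.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # node files prefix each line with "Node N "

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"                    # e.g. 0 for HugePages_Surp
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp

On the machine traced here this prints 0, matching the surp=0 recorded above.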
00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:53.726 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33264108 kB' 'MemAvailable: 37228852 kB' 'Buffers: 2704 kB' 'Cached: 14622556 kB' 'SwapCached: 0 kB' 'Active: 11479660 kB' 'Inactive: 3701476 kB' 'Active(anon): 11014872 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559076 kB' 'Mapped: 168900 kB' 'Shmem: 10458996 kB' 'KReclaimable: 410144 kB' 'Slab: 704848 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294704 kB' 'KernelStack: 10032 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12018788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190288 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
[trace condensed: the same read/match/continue cycle runs against \H\u\g\e\P\a\g\e\s\_\R\s\v\d for every key, MemTotal through HugePages_Free; none match]
00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:53.728 nr_hugepages=1024
18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:53.728 resv_hugepages=0
18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:53.728 surplus_hugepages=0
18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:53.728 anon_hugepages=0
18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
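With surp=0 and resv=0 in hand, hugepages.sh@102-@109 print the bookkeeping values and assert that the configured page count is consistent with the kernel's counters. A small sketch of that check, reusing the get_meminfo sketch above (verify_hugepage_count is an illustrative name, not an SPDK function):

# Re-check the hugepage accounting seen at setup/hugepages.sh@99-@109:
# HugePages_Total must equal nr_hugepages plus surplus and reserved pages.
verify_hugepage_count() {
    local nr_hugepages=$1
    local surp resv total

    surp=$(get_meminfo HugePages_Surp)    # 0 in the trace above
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in the trace above
    total=$(get_meminfo HugePages_Total)  # 1024 in the trace above

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"

    (( total == nr_hugepages + surp + resv ))
}

verify_hugepage_count 1024

On the values traced here the check passes: 1024 == 1024 + 0 + 0.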
'MemTotal: 52291180 kB' 'MemFree: 33264108 kB' 'MemAvailable: 37228852 kB' 'Buffers: 2704 kB' 'Cached: 14622568 kB' 'SwapCached: 0 kB' 'Active: 11479680 kB' 'Inactive: 3701476 kB' 'Active(anon): 11014892 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559060 kB' 'Mapped: 168900 kB' 'Shmem: 10459008 kB' 'KReclaimable: 410144 kB' 'Slab: 704848 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294704 kB' 'KernelStack: 10032 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12018812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190288 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB' 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.728 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:53.729 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # scan continues over KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted -- none match HugePages_Total, each iteration hits continue
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == HugePages_Total ]]
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29-30 -- # for node in /sys/devices/system/node/node+([0-9]): nodes_sys[0]=512, nodes_sys[1]=512
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 20594736 kB' 'MemUsed: 12287012 kB' 'SwapCached: 0 kB' 'Active: 6832220 kB' 'Inactive: 3397596 kB' 'Active(anon): 6620812 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9919388 kB' 'Mapped: 89504 kB' 'AnonPages: 313512 kB' 'Shmem: 6310384 kB' 'KernelStack: 5672 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 270388 kB' 'Slab: 420168 kB' 'SReclaimable: 270388 kB' 'SUnreclaim: 149780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
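The xtrace above is the hot loop of SPDK's get_meminfo helper: it snapshots a meminfo file into an array, then scans it key by key until the requested field (here HugePages_Total, then the per-node HugePages_Surp) matches and its value is echoed back. Below is a minimal sketch of that helper, reconstructed from the setup/common.sh line references in the trace; the assembled control flow and the fallback return are my reading of the trace, not verbatim SPDK source.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    # get_meminfo KEY [NODE] -- echo the value of KEY from /proc/meminfo,
    # or from the given NUMA node's meminfo file when NODE is supplied.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        # Linear scan: split each "Key: value kB" line, return on first match.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1   # assumed behavior when the key is absent
    }

    get_meminfo HugePages_Total      # -> 1024 in the run above
    get_meminfo HugePages_Surp 0     # -> 0 (node0 surplus)

Because IFS is set to ': ', both the colon and the space act as field separators, which is why the trace shows values like 1024 arriving without the trailing "kB" unit: that unit lands in the discarded third field.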
00:02:53.730 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # node0 meminfo scan: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free -- none match HugePages_Surp, each iteration hits continue
00:02:53.731 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == HugePages_Surp ]]
00:02:53.731 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:53.731 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:53.731 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:53.731 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:53.731 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:53.991 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:53.991 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:53.991 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:02:53.991 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:53.991 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:53.991 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:53.991 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:53.991 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:53.991 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:53.991 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:53.991 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:53.991 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:53.992 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 12668896 kB' 'MemUsed: 6740536 kB' 'SwapCached: 0 kB' 'Active: 4647452 kB' 'Inactive: 303880 kB' 'Active(anon): 4394072 kB' 'Inactive(anon): 0 kB' 'Active(file): 253380 kB' 'Inactive(file): 303880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4705904 kB' 'Mapped: 79396 kB' 'AnonPages: 245116 kB' 'Shmem: 4148644 kB' 'KernelStack: 4392 kB' 'PageTables: 3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139756 kB' 'Slab: 284680 kB' 'SReclaimable: 139756 kB' 'SUnreclaim: 144924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
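Back in setup/hugepages.sh, verify_nr_hugepages walks every NUMA node, folding reserved and surplus pages into the expected per-node totals before comparing them against what the kernel actually allocated. A sketch of that accounting loop, inferred from the @115-117 records in the trace; the exact resv/surp bookkeeping around it is an assumption, not confirmed SPDK source:

    # Assumed shape of the loop traced at setup/hugepages.sh@115-117.
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))               # spread reserved pages
        surp=$(get_meminfo HugePages_Surp "$node")   # per-node surplus
        (( nodes_test[node] += surp ))               # the trace shows "+= 0" here
    done

On this machine both node0 and node1 report HugePages_Surp: 0, so the expected totals stay at 512 per node, as the scan below confirms.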
00:02:53.992 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # node1 meminfo scan: MemTotal through HugePages_Free (same key order as node0) -- none match HugePages_Surp, each iteration hits continue
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == HugePages_Surp ]]
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:53.993 node0=512 expecting 512
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:53.993 node1=512 expecting 512
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == 512 ]]
00:02:53.993
00:02:53.993 real 0m1.224s
00:02:53.993 user 0m0.545s
00:02:53.993 sys 0m0.711s
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:02:53.993 18:59:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:53.993 ************************************
00:02:53.993 END TEST even_2G_alloc
00:02:53.993 ************************************
00:02:53.993 18:59:59 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:02:53.993 18:59:59 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:53.993 18:59:59 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:53.993 18:59:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:53.993 ************************************
00:02:53.993 START TEST odd_alloc
00:02:53.993 ************************************
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
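The get_test_nr_hugepages call above asks for 2098176 kB. At the default 2048 kB hugepage size that is 1024.5 pages, so rounding up yields the deliberately odd count of 1025, which cannot split evenly across this machine's two NUMA nodes; the trace that follows parks 513 pages on one node and 512 on the other. A sketch of the arithmetic, assuming ceiling division and following the trace's fill-from-last-node order (the exact SPDK expressions may differ):

    # Hypothetical reconstruction of the odd_alloc sizing math.
    size_kb=2098176          # requested hugepage memory, from the trace
    hp_kb=2048               # default 2 MiB hugepage size, in kB
    nr_hugepages=$(( (size_kb + hp_kb - 1) / hp_kb ))   # ceil -> 1025
    no_nodes=2
    base=$(( nr_hugepages / no_nodes ))                 # 512
    rem=$(( nr_hugepages % no_nodes ))                  # 1
    nodes_test=( $(( base + rem )) "$base" )            # node0=513, node1=512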
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:53.993 18:59:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:54.936 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:54.936 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:54.936 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:54.936 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:54.936 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:54.936 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:54.936 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:54.936 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:54.936 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:54.936 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:54.936 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:54.936 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:54.936 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:54.936 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:54.936 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:54.936 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:54.936 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.936 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33234764 kB' 'MemAvailable: 37199508 kB' 'Buffers: 2704 kB' 'Cached: 14622652 kB' 'SwapCached: 0 kB' 'Active: 11488264 kB' 'Inactive: 3701476 kB' 'Active(anon): 11023476 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567644 kB' 'Mapped: 169820 kB' 'Shmem: 10459092 kB' 'KReclaimable: 410144 kB' 'Slab: 704760 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294616 kB' 'KernelStack: 10144 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12029648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190436 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
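The meminfo snapshot above already shows HugePages_Total: 1025, so the odd allocation took effect. Before counting those explicit pages, verify_nr_hugepages first rules out interference from transparent hugepages: the @96 record tests the THP mode string against *[never]*, and only when THP is not disabled does @97 fetch AnonHugePages. A sketch of that guard; the sysfs path is the standard THP location, assumed here rather than read from the trace:

    # Guard traced at setup/hugepages.sh@96-97 (reconstruction, not SPDK source).
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP may be active; anonymous hugepages are accounted separately
        # from the explicitly reserved pool.
        anon=$(get_meminfo AnonHugePages)                 # kB; 0 in this run
    fi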
00:02:54.937 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # /proc/meminfo scan: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal -- none match AnonHugePages so far, each iteration hits continue
00:02:54.938
19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33235308 kB' 'MemAvailable: 37200052 kB' 'Buffers: 2704 kB' 'Cached: 14622656 kB' 'SwapCached: 0 kB' 'Active: 11487872 kB' 'Inactive: 3701476 kB' 'Active(anon): 11023084 kB' 
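The same get_meminfo cycle repeats below for HugePages_Surp, HugePages_Rsvd and HugePages_Total. For reference, a minimal sketch of the loop these setup/common.sh@17-33 records are tracing, reconstructed from the trace itself (an approximation for reading the log, not the verbatim SPDK script):

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a NUMA node argument, read that node's meminfo instead
        # (common.sh@23-25); node is empty in this run, so /proc/meminfo is used.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node N "; strip it (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan key by key: each non-matching key appears in the trace as a
        # "[[ <key> == ... ]]" test followed by "continue" (common.sh@31-32).
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0  # common.sh@33: print the value and stop
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo AnonHugePages, the loop walks every key until AnonHugePages matches and prints its value, 0 here, which is exactly the echo 0 / return 0 / anon=0 sequence recorded above.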
00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... xtrace records elided: setup/common.sh@17-31 set get=HugePages_Surp and node= (empty, so mem_f=/proc/meminfo), mapfile the file into mem[], strip any "Node N " prefix and set IFS=': ' ...]
00:02:54.938 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33235308 kB' 'MemAvailable: 37200052 kB' 'Buffers: 2704 kB' 'Cached: 14622656 kB' 'SwapCached: 0 kB' 'Active: 11487872 kB' 'Inactive: 3701476 kB' 'Active(anon): 11023084 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567172 kB' 'Mapped: 169740 kB' 'Shmem: 10459096 kB' 'KReclaimable: 410144 kB' 'Slab: 704784 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294640 kB' 'KernelStack: 10144 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12029664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190372 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
[... xtrace records elided: every key from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped with "continue" ...]
00:02:54.940 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:54.940 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.940 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:54.940 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:54.940 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... xtrace records elided: setup/common.sh@17-31 set get=HugePages_Rsvd and re-read /proc/meminfo as above ...]
00:02:54.940 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33235308 kB' 'MemAvailable: 37200052 kB' 'Buffers: 2704 kB' 'Cached: 14622672 kB' 'SwapCached: 0 kB' 'Active: 11482400 kB' 'Inactive: 3701476 kB' 'Active(anon): 11017612 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561836 kB' 'Mapped: 169304 kB' 'Shmem: 10459112 kB' 'KReclaimable: 410144 kB' 'Slab: 704772 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294628 kB' 'KernelStack: 10176 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12022620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190368 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
[... xtrace records elided: every key from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped with "continue" ...]
00:02:54.943 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:54.943 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.943 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:54.943 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:54.943 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:02:54.943 nr_hugepages=1025
00:02:54.943 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:54.943 resv_hugepages=0
00:02:54.943 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:54.943 surplus_hugepages=0
00:02:54.943 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:54.943 anon_hugepages=0
00:02:54.943 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:54.943 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:02:54.943 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... xtrace records elided: setup/common.sh@17-25 set get=HugePages_Total and node= (empty, so mem_f=/proc/meminfo) ...]
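At this point the test has all three counters it needs. What setup/hugepages.sh@97-110 is checking, sketched from the records above (again an approximation built on the get_meminfo sketch earlier, not the verbatim script):

    # Values recovered by the three scans above:
    anon=0              # AnonHugePages   (hugepages.sh@97)
    surp=0              # HugePages_Surp  (hugepages.sh@99)
    resv=0              # HugePages_Rsvd  (hugepages.sh@100)
    nr_hugepages=1025   # the odd page count this test requested

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # odd_alloc passes only if the kernel provisioned exactly the requested
    # 1025 pages and none of them are surplus or reserved (@107, @109):
    (( 1025 == nr_hugepages + surp + resv ))
    (( 1025 == nr_hugepages ))

    # Cross-check against the kernel's own counter (@110); the snapshot
    # below reports 'HugePages_Total: 1025', so this holds as well:
    (( $(get_meminfo HugePages_Total) == nr_hugepages ))

The get_meminfo HugePages_Total call that follows re-reads /proc/meminfo, which is why a full snapshot is printed once more below.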
[... xtrace records elided: setup/common.sh@28-31 mapfile /proc/meminfo into mem[], strip any "Node N " prefix and set IFS=': ' ...]
00:02:54.943 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33231528 kB' 'MemAvailable: 37196272 kB' 'Buffers: 2704 kB' 'Cached: 14622692 kB' 'SwapCached: 0 kB' 'Active: 11486128 kB' 'Inactive: 3701476 kB' 'Active(anon): 11021340 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565444 kB' 'Mapped: 169356 kB' 'Shmem: 10459132 kB' 'KReclaimable: 410144 kB' 'Slab: 704772 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294628 kB' 'KernelStack: 10160 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12027712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190352 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
[... xtrace records elided: setup/common.sh@31-32 compares each key from MemTotal through NFS_Unstable against HugePages_Total and skips it with "continue"; the scan continues ...]
00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.944 19:00:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.944 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:54.945 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 20559632 kB' 'MemUsed: 12322116 kB' 'SwapCached: 0 kB' 'Active: 6839568 kB' 'Inactive: 3397596 kB' 'Active(anon): 6628160 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9919452 kB' 'Mapped: 90224 kB' 'AnonPages: 320880 kB' 'Shmem: 6310448 kB' 'KernelStack: 5784 kB' 'PageTables: 4788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 270388 kB' 'Slab: 420008 kB' 'SReclaimable: 270388 kB' 'SUnreclaim: 149620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.206 19:00:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.206 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
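[annotator's note] Once both per-node scans complete, hugepages.sh@126-130 (the sorted_t/sorted_s records near the end of this test, below) compares the expected and observed distributions order-insensitively: each count is used as the index of a plain indexed bash array, and since ${!arr[*]} always expands indices in ascending order, the two index lists match exactly when the distributions agree as multisets. A minimal reproduction with the 512/513 split from this run (which array holds which role is inferred from the "node0=512 expecting 513" output below):

    # Using counts as indexed-array indices yields a sorted, de-duplicated set.
    declare -a sorted_t sorted_s
    nodes_test=([0]=513 [1]=512)   # expected per-node split of 1025 pages
    nodes_sys=([0]=512 [1]=513)    # split the kernel actually produced
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo match   # 512 513 == 512 513

This is why odd_alloc passes even though the kernel is free to park the odd 1025th page on either node.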
00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 12667864 kB' 'MemUsed: 6741568 kB' 'SwapCached: 0 kB' 'Active: 4648180 kB' 'Inactive: 303880 kB' 'Active(anon): 4394800 kB' 'Inactive(anon): 0 kB' 'Active(file): 253380 kB' 'Inactive(file): 303880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4705984 kB' 'Mapped: 78948 kB' 'AnonPages: 246244 kB' 'Shmem: 4148724 kB' 'KernelStack: 4360 kB' 'PageTables: 3384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139756 kB' 'Slab: 284752 kB' 'SReclaimable: 139756 kB' 'SUnreclaim: 144996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.207 19:00:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.207 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
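[annotator's note] Both HugePages_Surp queries in this pass (node 0 earlier, node 1 in progress here) read a per-node snapshot rather than the global one; setup/common.sh@22-24 swaps the data source only when sysfs exposes it. In outline (the node number is illustrative):

    # Prefer the per-node meminfo when it exists, else fall back to the
    # system-wide file, as traced at setup/common.sh@22-24.
    node=1
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    # Per-node files prefix every field with "Node 1 ", which is what the
    # mem=("${mem[@]#Node +([0-9]) }") strip in the scan accounts for.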
00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.208 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513'
00:02:55.209 node0=512 expecting 513
00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:02:55.209 node1=513 expecting 512
00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:02:55.209
00:02:55.209 real 0m1.182s
00:02:55.209 user 0m0.531s
00:02:55.209 sys 0m0.685s
00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:02:55.209 19:00:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:55.209 ************************************
00:02:55.209 END TEST odd_alloc
00:02:55.209 ************************************
00:02:55.209 19:00:01 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:02:55.209 19:00:01 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:55.209 19:00:01 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:55.209 19:00:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:55.209 ************************************
00:02:55.209 START TEST custom_alloc
00:02:55.209 ************************************
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}"
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:55.209 19:00:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:56.147 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:56.147 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:56.147 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:56.147 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:56.147 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:56.147 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:56.147 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:56.147 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:56.147 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:56.147 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:56.147 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:56.147 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:56.147 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:56.147 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:56.147 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:56.147 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:56.147 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32147168 kB' 'MemAvailable: 36111912 kB' 'Buffers: 2704 kB' 'Cached: 14622780 kB' 'SwapCached: 0 kB' 'Active: 11484856 kB' 'Inactive: 3701476 kB' 'Active(anon): 11020068 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564056 kB' 'Mapped: 168668 kB' 'Shmem: 10459220 kB' 'KReclaimable: 410144 kB' 'Slab: 704524 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294380 kB' 'KernelStack: 10048 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12018264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190420 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
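[editor's note] The trace above shows custom_alloc building its per-node allocation: get_test_nr_hugepages 2097152 converts the request into nr_hugepages=1024 (the argument is evidently in kB, since 2097152 / 2048 = 1024 matches the traced result), get_test_nr_hugepages_per_node copies the pre-seeded nodes_hp[] counts into nodes_test[], and hugepages.sh@181-@187 folds nodes_hp[0]=512 and nodes_hp[1]=1024 into the HUGENODE spec handed to scripts/setup.sh. A minimal Bash sketch of that folding step, reconstructed from the trace; the comma-join is assumed from the @187 result, and this is not the verbatim SPDK helper:

    # Sketch only: nodes_hp values taken from this run; join step assumed.
    declare -a nodes_hp=([0]=512 [1]=1024)   # per-NUMA-node 2048 kB page counts
    declare -a HUGENODE=()
    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")   # hugepages.sh@182
        ((_nr_hugepages += nodes_hp[node]))               # hugepages.sh@183
    done
    HUGENODE=$(IFS=,; printf '%s' "${HUGENODE[*]}")       # join with commas
    echo "HUGENODE=$HUGENODE total=$_nr_hugepages"
    # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 total=1536

The total of 1536 matches the nr_hugepages=1536 recorded at hugepages.sh@188 once setup output returns.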
00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.147 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32145420 kB' 'MemAvailable: 36110164 kB' 'Buffers: 2704 kB' 'Cached: 14622784 kB' 'SwapCached: 0 kB' 'Active: 11481408 kB' 'Inactive: 3701476 kB' 'Active(anon): 11016620 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560680 kB' 'Mapped: 168604 kB' 'Shmem: 10459224 kB' 'KReclaimable: 410144 kB' 'Slab: 704524 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294380 kB' 'KernelStack: 10096 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12014444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190384 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.148 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
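[editor's note] Each of the scans in this section follows the same get_meminfo pattern traced from setup/common.sh: slurp /proc/meminfo (or a node's sysfs meminfo) with mapfile, strip any "Node <n> " prefix, then walk key/value pairs with IFS=': ' read until the requested field matches. A minimal self-contained Bash sketch of that pattern, mirroring the traced statements; it is simplified for illustration and the real setup/common.sh may differ in details:

    #!/usr/bin/env bash
    shopt -s extglob   # for the +([0-9]) pattern used to strip "Node N " prefixes

    # Sketch of a get_meminfo-style lookup as traced above; not verbatim SPDK code.
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        # per-node queries read the sysfs copy instead (common.sh@23-25 branch)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"            # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }")     # common.sh@29
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"                  # common.sh@31
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; } # common.sh@32-33
        done
        return 1
    }

    # get_meminfo HugePages_Total   # -> 1536 on the box traced above

The trace looks noisy because xtrace logs every [[ key == ... ]] comparison and continue for every meminfo field until the wanted key is reached, which is why each lookup produces dozens of near-identical lines.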
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32145396 kB' 'MemAvailable: 36110140 kB' 'Buffers: 2704 kB' 'Cached: 14622800 kB' 'SwapCached: 0 kB' 'Active: 11485256 kB' 'Inactive: 3701476 kB' 'Active(anon): 11020468 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564436 kB' 'Mapped: 168604 kB' 'Shmem: 10459240 kB' 'KReclaimable: 410144 kB' 'Slab: 704556 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294412 kB' 'KernelStack: 10080 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12018304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed:
190372 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:56.149 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue xtrace quadruple repeats for every remaining /proc/meminfo key (Zswapped through HugePages_Free) until the matching key is reached ...]
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:02:56.150 nr_hugepages=1536
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:56.150 resv_hugepages=0
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:56.150 surplus_hugepages=0
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:56.150 anon_hugepages=0
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.150 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32145672 kB' 'MemAvailable: 36110416 kB' 'Buffers: 2704 kB' 'Cached: 14622824 kB' 'SwapCached: 0 kB' 'Active: 11485296 kB' 'Inactive: 3701476 kB' 'Active(anon): 11020508 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564452 kB' 'Mapped: 168748 kB' 'Shmem: 10459264 kB' 'KReclaimable: 410144 kB' 'Slab: 704524 kB' 'SReclaimable: 410144 kB' 'SUnreclaim: 294380 kB' 'KernelStack: 10096 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12018324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190356 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
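For readability, here is a minimal sketch of the helper whose trace appears above. It is reconstructed from the setup/common.sh@16-33 xtrace records, not copied from the SPDK source, so treat the structure as approximate: get_meminfo picks /proc/meminfo or a per-node sysfs meminfo file, strips any "Node N " prefix, then scans "Key: value" pairs until the requested key matches.

    #!/usr/bin/env bash
    # Sketch of get_meminfo as reconstructed from the xtrace above
    # (setup/common.sh@16-33); not the verbatim SPDK implementation.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With a node argument, prefer the per-node sysfs meminfo
        # (trace lines @23/@24; with node empty the file does not exist).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node N " prefix; strip it (extglob).
        mem=("${mem[@]#Node +([0-9]) }")

        # Walk each "Key: value ..." line until the requested key
        # matches, then print just the numeric value (trace @31-@33).
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # In this run: get_meminfo HugePages_Total   -> 1536
    #              get_meminfo HugePages_Surp 0  -> 0

The per-key compare/continue quadruples that dominate this log are simply the xtrace of that read loop, one quadruple per meminfo line.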
[... xtrace of the key-by-key scan for HugePages_Total elided; every non-matching key (MemTotal through HugePages_Free) hits the same continue path shown above ...]
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.151 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 20559968 kB' 'MemUsed: 12321780 kB' 'SwapCached: 0 kB' 'Active: 6832244 kB' 'Inactive: 3397596 kB' 'Active(anon): 6620836 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9919528 kB' 'Mapped: 89800 kB' 'AnonPages: 313428 kB' 'Shmem: 6310524 kB' 'KernelStack: 5720 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 270388 kB' 'Slab: 419808 kB' 'SReclaimable: 270388 kB' 'SUnreclaim: 149420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
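The setup/hugepages.sh@112-117 records above are the per-node half of the check: get_nodes reads each node's HugePages_Total into nodes_sys, then every expected per-node count in nodes_test is padded with its share of reserved and surplus pages. A rough sketch of that loop, again reconstructed from the trace rather than quoted from the source, reusing the get_meminfo sketch from earlier:

    # Sketch of the per-node accounting traced above
    # (setup/hugepages.sh@27-33 and @115-117); reconstructed, not verbatim.
    shopt -s extglob
    declare -a nodes_sys nodes_test

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))
    }

    verify_nodes() {
        local node surp
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))   # trace @116; resv is 0 here
            surp=$(get_meminfo HugePages_Surp "$node")
            (( nodes_test[node] += surp ))   # trace @117; 0 on both nodes
            # Final comparison against nodes_sys is assumed from the
            # test's purpose; it is not visible in this excerpt.
            (( nodes_test[node] == nodes_sys[node] )) || return 1
        done
    }

In this run get_nodes comes out as 512 pages on node0 and 1024 on node1, which matches the asymmetric split this custom_alloc case apparently requested.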
[... xtrace of the node0 HugePages_Surp scan over each per-node meminfo key elided; same compare/continue pattern as above ...]
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.152 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 11581172 kB' 'MemUsed: 7828260 kB' 'SwapCached: 0 kB' 'Active: 4651484 kB' 'Inactive: 303880 kB' 'Active(anon): 4398104 kB' 'Inactive(anon): 0 kB' 'Active(file): 253380 kB' 'Inactive(file): 303880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4706020 kB' 'Mapped: 78796 kB' 'AnonPages: 249392 kB' 'Shmem: 4148760 kB' 'KernelStack: 4392 kB' 'PageTables: 3296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139756 kB' 'Slab: 284716 kB' 'SReclaimable: 139756 kB' 'SUnreclaim: 144960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
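With both per-node snapshots now printed, the invariant this pass is driving at can be stated directly. A worked check using this run's values (not part of the log; variable names are illustrative):

    # Values printed above in this run:
    #   global HugePages_Total = 1536, HugePages_Rsvd = 0
    #   node0  HugePages_Total =  512, HugePages_Surp = 0
    #   node1  HugePages_Total = 1024, HugePages_Surp = 0
    nr_hugepages=1536 resv=0 surp=0
    node0=512 node1=1024
    (( node0 + node1 == nr_hugepages )) &&
        (( nr_hugepages == nr_hugepages + surp + resv )) &&  # the @107 check
        echo "custom_alloc split holds: 512 + 1024 == 1536"

The node1 scan that follows confirms the last input to that arithmetic, HugePages_Surp on node1.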
[... xtrace of the node1 HugePages_Surp scan over each per-node meminfo key elided; timestamps advance from 00:02:56.152 to 00:02:56.420 mid-scan ...]
00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.420 19:00:02
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:56.420 node0=512 expecting 512 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.420 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.421 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:56.421 node1=1024 expecting 1024 00:02:56.421 19:00:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:56.421 00:02:56.421 real 0m1.132s 00:02:56.421 user 0m0.523s 00:02:56.421 sys 0m0.641s 00:02:56.421 19:00:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:56.421 19:00:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:56.421 ************************************ 00:02:56.421 END TEST custom_alloc 00:02:56.421 ************************************ 00:02:56.421 19:00:02 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:56.421 19:00:02 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:56.421 19:00:02 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:56.421 19:00:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:56.421 ************************************ 00:02:56.421 START TEST no_shrink_alloc 00:02:56.421 ************************************ 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
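[editor's sketch] The xtrace runs condensed above all come from the same helper: setup/common.sh's get_meminfo, which walks /proc/meminfo field by field. A minimal standalone bash sketch of that pattern, reconstructed from the trace rather than taken from the SPDK source (the streaming read and the per-node fallback are assumptions; the real helper also strips a "Node <N> " prefix from the per-node file, omitted here):

  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # The trace checks the per-node sysfs file first; with an empty
      # $node that test fails and the global /proc/meminfo is used.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue  # the long "continue" runs in the trace
          echo "$val"
          return 0
      done < "$mem_f"
      return 1
  }
  get_meminfo_sketch HugePages_Surp   # prints 0 on this box, as in the trace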
00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:56.421 19:00:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:57.360 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:57.360 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:57.360 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:57.360 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:57.360 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:57.360 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:57.360 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:57.360 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:57.360 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:57.360 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:57.360 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:57.360 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:57.360 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:57.360 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:57.360 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:57.360 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:57.360 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.360 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.361 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33128784 kB' 'MemAvailable: 37093560 kB' 'Buffers: 2704 kB' 'Cached: 14622904 kB' 'SwapCached: 0 kB' 'Active: 11485404 kB' 'Inactive: 3701476 kB' 'Active(anon): 11020616 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564388 kB' 'Mapped: 168680 kB' 'Shmem: 10459344 kB' 'KReclaimable: 410176 kB' 'Slab: 704584 kB' 'SReclaimable: 410176 kB' 'SUnreclaim: 294408 kB' 'KernelStack: 10048 kB' 'PageTables: 7740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12017044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190452 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
[trace condensed: get_meminfo (setup/common.sh@31-32) walks this snapshot field by field, continuing past every field that is not AnonHugePages]
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.362 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33129036 kB' 'MemAvailable: 37093812 kB' 'Buffers: 2704 kB' 'Cached: 14622904 kB' 'SwapCached: 0 kB' 'Active: 11485396 kB' 'Inactive: 3701476 kB' 'Active(anon): 11020608 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564424 kB' 'Mapped: 168692 kB' 'Shmem: 10459344 kB' 'KReclaimable: 410176 kB' 'Slab: 704608 kB' 'SReclaimable: 410176 kB' 'SUnreclaim: 294432 kB' 'KernelStack: 10016 kB' 'PageTables: 7644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12017060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190404 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
[trace condensed: get_meminfo scans the snapshot again, continuing past every field that is not HugePages_Surp]
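[editor's sketch] At this point verify_nr_hugepages has anon=0 and is collecting surp and resv with one full get_meminfo scan per field. The same bookkeeping can be expressed compactly; a standalone bash sketch in the spirit of the traced check (the function name and the exact pass/fail rule are assumptions, not the SPDK implementation):

  verify_hugepages_sketch() {
      local expected=$1 anon surp resv total free
      anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
      surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
      resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
      total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
      free=$(awk '$1 == "HugePages_Free:" {print $2}' /proc/meminfo)
      echo "anon=$anon surp=$surp resv=$resv total=$total free=$free"
      # Expect the whole pool allocated and unused, with no surplus pages.
      (( total == expected && free == expected && surp == 0 ))
  }
  verify_hugepages_sketch 1024 && echo 'nr_hugepages OK'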
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.364 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33128332 kB' 'MemAvailable: 37093108 kB' 'Buffers: 2704 kB' 'Cached: 14622924 kB' 'SwapCached: 0 kB' 'Active: 11485224 kB' 'Inactive: 3701476 kB' 'Active(anon): 11020436 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564224 kB' 'Mapped: 168616 kB' 'Shmem: 10459364 kB' 'KReclaimable: 410176 kB' 'Slab: 704608 kB' 'SReclaimable: 410176 kB' 'SUnreclaim: 294432 kB' 'KernelStack: 10064 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12017084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190404 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
[trace condensed: get_meminfo now scans for HugePages_Rsvd; the field-by-field continue loop proceeds as before]
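[editor's sketch] Every get_meminfo call above runs with node= empty, so the /sys/devices/system/node/node/meminfo existence check fails and the helper falls back to the global /proc/meminfo. With a real node id the same scan works on the per-node file, whose lines carry a "Node <N> " prefix that the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips. A small bash illustration using the standard sysfs path (not SPDK code):

  shopt -s extglob   # required for the +([0-9]) pattern seen in the trace
  node=0
  mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
  mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node 0 " from each line
  printf '%s\n' "${mem[@]}" | awk -F': +' '/^HugePages_/ {print $1 "=" $2}'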
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:57.628 nr_hugepages=1024
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:57.628 resv_hugepages=0
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:57.628 surplus_hugepages=0
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:57.628 anon_hugepages=0
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33128020 kB' 'MemAvailable: 37092796 kB' 'Buffers: 2704 kB' 'Cached: 14622944 kB' 'SwapCached: 0 kB' 'Active: 11485528 kB' 'Inactive: 3701476 kB' 'Active(anon): 11020740 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564500 kB' 'Mapped: 168616 kB' 'Shmem: 10459384 kB' 'KReclaimable: 410176 kB' 'Slab: 704608 kB' 'SReclaimable: 410176 kB' 'SUnreclaim: 294432 kB' 'KernelStack: 10064 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12017108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190404 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:57.628 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
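At hugepages.sh@107-@110 the test cross-checks the kernel's hugepage counters against what it configured. A hedged sketch of that arithmetic, reusing the get_meminfo sketch above (variable names follow the trace):

    # Allocated total must equal the requested count plus surplus plus reserved.
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)     # 0 here
    resv=$(get_meminfo HugePages_Rsvd)     # 0 here
    total=$(get_meminfo HugePages_Total)   # 1024 here
    (( total == nr_hugepages + surp + resv )) || {
        echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
        exit 1
    }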
00:02:57.629 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:57.629 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:57.629 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:57.629 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:57.629 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
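get_nodes (hugepages.sh@27-@33) enumerates the NUMA nodes in sysfs and records a per-node hugepage count. The trace only shows the resolved values (1024 for node0, 0 for node1), so the right-hand side of the assignment below is an assumption:

    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything up to the last "node", leaving the index
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this machine: node0 holds 1024, node1 holds 0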
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 19490844 kB' 'MemUsed: 13390904 kB' 'SwapCached: 0 kB' 'Active: 6837764 kB' 'Inactive: 3397596 kB' 'Active(anon): 6626356 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9919592 kB' 'Mapped: 89668 kB' 'AnonPages: 318844 kB' 'Shmem: 6310588 kB' 'KernelStack: 5672 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 270388 kB' 'Slab: 419784 kB' 'SReclaimable: 270388 kB' 'SUnreclaim: 149396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
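The reason for the extglob strip at common.sh@29: per-node meminfo lines carry a "Node N " prefix that /proc/meminfo lacks. A two-line demonstration:

    shopt -s extglob
    line='Node 0 HugePages_Surp: 0'
    echo "${line#Node +([0-9]) }"   # prints: HugePages_Surp: 0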
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
[setup/common.sh@31-@32: the loop repeats for every remaining node0 meminfo key (Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free); none match HugePages_Surp, each iteration hitting "continue"]
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31
IFS=': '
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.630 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:57.631 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:57.631 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:57.631 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:57.631 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:57.631 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:57.631 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:57.631 node0=1024 expecting 1024
00:02:57.631 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:57.631 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:02:57.631 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:02:57.631 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:02:57.631 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:57.631 19:00:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:58.568 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:58.568 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:58.568 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:58.568 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:58.568 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:58.568 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:58.568 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:58.568 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:58.568 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:58.568 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:58.568 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:58.568 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:58.568 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:58.568 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:58.568 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:58.568 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:58.568 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:58.568 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
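What this step exercises: the no_shrink_alloc case pre-allocates 1024 hugepages on node0, then re-runs scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no (the @202 lines above) and verifies the pool stays at 1024, i.e. setup.sh never shrinks an existing allocation; the "Requested 512 hugepages but 1024 already allocated on node0" line is that behavior firing. Roughly the same thing can be tried by hand through the standard kernel knob (illustrative commands, not part of the test scripts; run as root from an SPDK checkout):

    echo 1024 > /proc/sys/vm/nr_hugepages         # pre-allocate 1024 2 MB pages
    NRHUGE=512 CLEAR_HUGE=no ./scripts/setup.sh   # ask setup for only 512
    cat /proc/sys/vm/nr_hugepages                 # still 1024: the pool was not shrunk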
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:58.568 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33138120 kB' 'MemAvailable: 37102896 kB' 'Buffers: 2704 kB' 'Cached: 14623172 kB' 'SwapCached: 0 kB' 'Active: 11485344 kB' 'Inactive: 3701476 kB' 'Active(anon): 11020556 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564060 kB' 'Mapped: 168764 kB' 'Shmem: 10459612 kB' 'KReclaimable: 410176 kB' 'Slab: 704764 kB' 'SReclaimable: 410176 kB' 'SUnreclaim: 294588 kB' 'KernelStack: 10080 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12017516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190468 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
[setup/common.sh@31-@32: the loop checks every /proc/meminfo key from MemTotal through HardwareCorrupted against AnonHugePages; none match, each iteration hitting "continue"]
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
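For orientation amid the trace: every "@17-@33" block above is one call to the get_meminfo helper from spdk/scripts/setup/common.sh, which slurps a meminfo file into an array and scans it key by key until the requested field matches, printing the value. A minimal sketch of that helper, reconstructed only from the @-line references in this trace (an approximation, not the verbatim SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob    # the +([0-9]) pattern below needs extended globbing

    # Sketch of get_meminfo (common.sh@17-@33), rebuilt from the xtrace;
    # treat every detail here as an assumption, not the real implementation.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, read the per-node sysfs meminfo instead (@23).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix (@29)
        # This loop is what floods the trace: one IFS/read/[[ ]]/continue
        # round per meminfo line until the requested key matches.
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. "0" for AnonHugePages above (@33)
                return 0
            fi
            continue
        done
        return 1
    }

    get_meminfo HugePages_Total    # prints 1024 on this CI node

Fed the dump above, get_meminfo AnonHugePages prints 0, which is exactly what the "@97 anon=0" line records. The dump is also self-consistent: HugePages_Total 1024 times Hugepagesize 2048 kB gives 2097152 kB, matching its Hugetlb line.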
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:58.570 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33138200 kB' 'MemAvailable: 37102976 kB' 'Buffers: 2704 kB' 'Cached: 14623172 kB' 'SwapCached: 0 kB' 'Active: 11485972 kB' 'Inactive: 3701476 kB' 'Active(anon): 11021184 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564772 kB' 'Mapped: 168660 kB' 'Shmem: 10459612 kB' 'KReclaimable: 410176 kB' 'Slab: 704804 kB' 'SReclaimable: 410176 kB' 'SUnreclaim: 294628 kB' 'KernelStack: 10016 kB' 'PageTables: 7596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12017168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190372 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
[setup/common.sh@31-@32: the loop checks every /proc/meminfo key from MemTotal through HugePages_Rsvd against HugePages_Surp; none match, each iteration hitting "continue"]
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
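The caller here is verify_nr_hugepages in the test's setup/hugepages.sh (the "setup/hugepages.sh@NN" frames). From the @89-@100 lines visible above it collects AnonHugePages, HugePages_Surp and HugePages_Rsvd through get_meminfo before doing the per-node accounting. A hedged sketch of that prologue, mirroring only what the trace shows (the real function does more):

    # Sketch of the verify_nr_hugepages prologue (hugepages.sh@89-@100);
    # reconstructed from the xtrace, so the exact wording is assumed.
    verify_nr_hugepages() {
        local node
        local sorted_t
        local sorted_s
        local surp resv anon

        anon=0
        # THP inflates AnonHugePages, so it is only sampled when transparent
        # hugepages are not disabled -- the "[never]" test at @96. Here the
        # setting reads "always [madvise] never", so the sample is taken.
        if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
            anon=$(get_meminfo AnonHugePages)    # -> 0 in this run (@97)
        fi
        surp=$(get_meminfo HugePages_Surp)       # surplus pages  -> 0 (@99)
        resv=$(get_meminfo HugePages_Rsvd)       # reserved pages -> 0 (@100)
        # ...per-node accounting and the "nodeN=X expecting Y" check follow.
    }

In this run all three samples come back 0, so the per-node totals reduce to the raw HugePages_Total of 1024 seen in the earlier "node0=1024 expecting 1024" check.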
00:02:58.572 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33138200 kB' 'MemAvailable: 37102976 kB' 'Buffers: 2704 kB' 'Cached: 14623196 kB' 'SwapCached: 0 kB' 'Active: 11485396 kB' 'Inactive: 3701476 kB' 'Active(anon): 11020608 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564116 kB' 'Mapped: 168640 kB' 'Shmem: 10459636 kB' 'KReclaimable: 410176 kB' 'Slab: 704796 kB' 'SReclaimable: 410176 kB' 'SUnreclaim: 294620 kB' 'KernelStack: 10048 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12017192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190372 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
[setup/common.sh@31-@32: the loop checks /proc/meminfo keys from MemTotal through NFS_Unstable against HugePages_Rsvd; none match, each iteration hitting "continue"]
00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.573 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:58.574 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:58.833 nr_hugepages=1024 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:58.833 resv_hugepages=0 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:58.833 surplus_hugepages=0 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:58.833 anon_hugepages=0 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
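The helper being stepped through here is setup/common.sh's get_meminfo: it loads a meminfo file into an array and walks it field by field until the requested key matches, which is exactly the [[ ... ]] / `continue` churn the trace shows. A minimal re-sketch of that logic, paraphrased from the xtrace above (argument handling and the file read are assumptions; the parse loop and the prefix strip are taken directly from the trace):

    shopt -s extglob                 # the +([0-9]) strip below needs extended globs

    get_meminfo() {                  # usage: get_meminfo <field> [node]
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # per-node counters live in sysfs; fall back to the global /proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # e.g. var=HugePages_Rsvd val=0 -> print the value and stop scanning
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
    }

Against the snapshot above, get_meminfo HugePages_Rsvd returns 0, and hugepages.sh then echoes the derived nr_hugepages / resv_hugepages / surplus_hugepages / anon_hugepages values seen in the log.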
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:58.833 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33138888 kB' 'MemAvailable: 37103664 kB' 'Buffers: 2704 kB' 'Cached: 14623224 kB' 'SwapCached: 0 kB' 'Active: 11485812 kB' 'Inactive: 3701476 kB' 'Active(anon): 11021024 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564624 kB' 'Mapped: 168624 kB' 'Shmem: 10459664 kB' 'KReclaimable: 410176 kB' 'Slab: 704796 kB' 'SReclaimable: 410176 kB' 'SUnreclaim: 294620 kB' 'KernelStack: 10112 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12017580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190388 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3340580 kB' 'DirectMap2M: 30136320 kB' 'DirectMap1G: 27262976 kB'
[xtrace condensed: the same setup/common.sh@31-32 per-field scan runs again, this time against HugePages_Total; every field before it takes the `continue` branch]
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
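The @23/@24 branch just traced is where get_meminfo switches from /proc/meminfo to node0's sysfs copy. Unlike /proc/meminfo, each line there is prefixed with the node number, which is what the mem=("${mem[@]#Node +([0-9]) }") strip in the trace removes before the same IFS=': ' parse can run. Illustrative first lines of such a file (field values taken from the node0 snapshot printed below; column spacing is approximate):

    $ head -3 /sys/devices/system/node/node0/meminfo
    Node 0 MemTotal:       32881748 kB
    Node 0 MemFree:        19488616 kB
    Node 0 MemUsed:        13393132 kB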
00:02:58.834 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 19488616 kB' 'MemUsed: 13393132 kB' 'SwapCached: 0 kB' 'Active: 6837896 kB' 'Inactive: 3397596 kB' 'Active(anon): 6626488 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9919604 kB' 'Mapped: 89736 kB' 'AnonPages: 319000 kB' 'Shmem: 6310600 kB' 'KernelStack: 5704 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 270388 kB' 'Slab: 419784 kB' 'SReclaimable: 270388 kB' 'SUnreclaim: 149396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: node0's snapshot above is scanned the same way, field by field against HugePages_Surp, until the match below]
00:02:58.835 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:58.835 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:58.835 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:58.835 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:58.835 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:58.835 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:58.835 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:58.835 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:58.835 node0=1024 expecting 1024
00:02:58.835 19:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:58.835
00:02:58.835 real 0m2.407s
00:02:58.835 user 0m1.063s
00:02:58.835 sys 0m1.411s
00:02:58.835 19:00:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:02:58.835 19:00:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:58.835 ************************************
00:02:58.835 END TEST no_shrink_alloc
00:02:58.835 ************************************
00:02:58.835 19:00:04 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:02:58.835 19:00:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:02:58.835 19:00:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:58.835 19:00:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:58.835 19:00:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:58.835 19:00:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:58.835 19:00:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:58.835 19:00:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:58.835 19:00:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:58.835 19:00:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:58.835 19:00:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:58.835 19:00:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:58.835 19:00:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:58.835 00:02:58.835 real 0m9.703s 00:02:58.835 user 0m3.970s 00:02:58.835 sys 0m5.108s 00:02:58.835 19:00:04 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:58.835 19:00:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:58.835 ************************************ 00:02:58.835 END TEST hugepages 00:02:58.835 ************************************ 00:02:58.835 19:00:04 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:58.835 19:00:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:58.835 19:00:04 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:58.835 19:00:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:58.835 ************************************ 00:02:58.835 START TEST driver 00:02:58.835 ************************************ 00:02:58.835 19:00:04 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:58.835 * Looking for test storage... 
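
The clear_hp teardown at the end of the hugepages suite walks every NUMA node and zeroes every hugepage pool through sysfs. As a stand-alone snippet (run as root; these sysfs paths are the standard kernel layout):

    # Zero out all hugepage pools on all NUMA nodes, as clear_hp does above.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    # Exported so the subsequent setup.sh invocations start from a clean
    # hugepage state, matching the trace above.
    export CLEAR_HUGE=yes
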
00:02:58.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:58.835 19:00:04 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:02:58.835 19:00:04 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:58.835 19:00:04 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.382 19:00:06 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:01.382 19:00:06 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:01.382 19:00:06 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:01.382 19:00:06 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:01.382 ************************************ 00:03:01.382 START TEST guess_driver 00:03:01.382 ************************************ 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 102 > 0 )) 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:01.382 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:01.382 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:01.382 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:01.382 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:01.382 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:01.382 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:01.382 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:01.382 19:00:07 setup.sh.driver.guess_driver
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:01.382 Looking for driver=vfio-pci 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.382 19:00:07 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:02.316 19:00:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:03.253 19:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:03.253 19:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:03.253 19:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:03.253 19:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:03.253 19:00:09 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:03.253 19:00:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:03.253 19:00:09 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.787 00:03:05.787 real 0m4.248s 00:03:05.787 user 0m0.941s 00:03:05.787 sys 0m1.597s 00:03:05.787 19:00:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:05.787 19:00:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:05.787 ************************************ 00:03:05.787 END TEST guess_driver 00:03:05.787 ************************************ 00:03:05.787 00:03:05.787 real 0m6.563s 00:03:05.787 user 0m1.440s 00:03:05.787 sys 0m2.534s 00:03:05.787 19:00:11 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:05.787 
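
What guess_driver just did, in short: with IOMMU groups present (102 on this box) and modprobe able to resolve the vfio_pci dependency chain, it settles on vfio-pci; the alternative path would try uio_pci_generic before giving up with 'No valid driver found'. A condensed sketch of that decision (simplified from the traced logic, not the exact driver.sh code):

    # Prefer vfio-pci when the IOMMU is usable, mirroring the trace above.
    pick_driver() {
        shopt -s nullglob
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        if ((${#iommu_groups[@]} > 0)) &&
            modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }

    driver=$(pick_driver)   # vfio-pci on this runner
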
19:00:11 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:05.787 ************************************ 00:03:05.787 END TEST driver 00:03:05.787 ************************************ 00:03:05.787 19:00:11 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:05.787 19:00:11 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:05.787 19:00:11 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:05.787 19:00:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:05.787 ************************************ 00:03:05.787 START TEST devices 00:03:05.787 ************************************ 00:03:05.787 19:00:11 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:05.787 * Looking for test storage... 00:03:05.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:05.787 19:00:11 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:05.787 19:00:11 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:05.787 19:00:11 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:05.787 19:00:11 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:06.721 19:00:12 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:06.721 19:00:12 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:06.721 19:00:12 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:06.721 19:00:12 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:06.721 19:00:12 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:06.721 19:00:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:06.721 19:00:12 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:06.721 19:00:12 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:84:00.0 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:06.721 19:00:12 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:06.721 19:00:12 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:06.721 No valid GPT data, 
bailing 00:03:06.721 19:00:12 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:06.721 19:00:12 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:06.721 19:00:12 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:06.721 19:00:12 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:06.721 19:00:12 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:06.721 19:00:12 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:84:00.0 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:06.721 19:00:12 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:06.721 19:00:12 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:06.721 19:00:12 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:06.721 19:00:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:06.721 ************************************ 00:03:06.721 START TEST nvme_mount 00:03:06.721 ************************************ 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:06.721 19:00:12 
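
Before mounting anything, the devices suite qualifies /dev/nvme0n1 as a disposable test disk: it must not be zoned, must carry no recognizable partition table (the spdk-gpt.py probe above bails and blkid prints no PTTYPE), and must be at least min_disk_size, 3 GiB; this roughly 1 TB drive passes. The same checks as a stand-alone snippet (blockdev stands in for the trace's sec_size_to_bytes helper):

    # Qualify a block device as a usable test disk, as traced above.
    block=/dev/nvme0n1
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes

    [[ $(cat "/sys/block/${block##*/}/queue/zoned") == none ]] &&
        [[ -z $(blkid -s PTTYPE -o value "$block") ]] &&
        (( $(blockdev --getsize64 "$block") >= min_disk_size )) &&
        echo "$block is usable"
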
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:06.721 19:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:08.098 Creating new GPT entries in memory. 00:03:08.098 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:08.098 other utilities. 00:03:08.098 19:00:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:08.098 19:00:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:08.098 19:00:13 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:08.098 19:00:13 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:08.098 19:00:13 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:09.034 Creating new GPT entries in memory. 00:03:09.034 The operation has completed successfully. 00:03:09.034 19:00:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:09.034 19:00:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:09.034 19:00:14 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2457653 00:03:09.034 19:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.034 19:00:14 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:09.034 19:00:14 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.034 19:00:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:09.034 19:00:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:84:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
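
The nvme_mount body above is the whole create path in a handful of commands: wipe the disk's partition data, carve one 1 GiB partition, format it ext4, and mount it under the test directory. Condensed:

    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                            # destroy GPT and MBR data
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 1 GiB = 2097152 sectors
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"

The verify step that follows simply rescans the setup.sh status output under PCI_ALLOWED=0000:84:00.0 and confirms that the active nvme0n1:nvme0n1p1 mount keeps that device from being rebound.
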
00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.035 19:00:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:09.972 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:09.972 19:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:10.230 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:10.230 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:10.230 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:10.230 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:10.230 19:00:16 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:84:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.230 19:00:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:11.168 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:84:00.0 data@nvme0n1 '' '' 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.428 19:00:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:12.379 19:00:18 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:12.637 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:12.637 00:03:12.637 real 0m5.681s 00:03:12.637 user 0m1.307s 00:03:12.637 sys 0m2.091s 00:03:12.637 19:00:18 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:12.637 19:00:18 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:12.637 ************************************ 00:03:12.637 END TEST nvme_mount 00:03:12.637 ************************************ 
00:03:12.637 19:00:18 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:12.637 19:00:18 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:12.637 19:00:18 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:12.637 19:00:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:12.637 ************************************ 00:03:12.637 START TEST dm_mount 00:03:12.637 ************************************ 00:03:12.637 19:00:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:03:12.637 19:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:12.637 19:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:12.637 19:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:12.637 19:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:12.638 19:00:18 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:13.574 Creating new GPT entries in memory. 00:03:13.574 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:13.574 other utilities. 00:03:13.574 19:00:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:13.574 19:00:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:13.574 19:00:19 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:13.574 19:00:19 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:13.574 19:00:19 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:14.524 Creating new GPT entries in memory. 00:03:14.524 The operation has completed successfully. 
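
dm_mount reuses the same partition_drive helper, this time for two 1 GiB partitions (the second sgdisk --new call follows just below), and then assembles /dev/mapper/nvme_dm_test from them with dmsetup. The trace does not echo the dm table itself, so the table here is an assumption: a plain linear concatenation of the two partitions:

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199     # nvme0n1p1
    flock "$disk" sgdisk "$disk" --new=2:2099200:4196351  # nvme0n1p2

    # Hypothetical table: each partition is 2097152 sectors long, and
    # dmsetup create reads the table from stdin when none is given.
    printf '%s\n' "0 2097152 linear ${disk}p1 0" \
                  "2097152 2097152 linear ${disk}p2 0" | dmsetup create nvme_dm_test
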
00:03:14.524 19:00:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:14.524 19:00:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:14.524 19:00:20 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:14.524 19:00:20 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:14.524 19:00:20 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:15.904 The operation has completed successfully. 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2459429 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:84:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.904 19:00:21 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.843 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:84:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:16.844 19:00:22 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.844 19:00:22 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:17.793 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:17.793 00:03:17.793 real 0m5.291s 00:03:17.793 user 0m0.827s 00:03:17.793 sys 0m1.396s 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:17.793 19:00:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:17.793 ************************************ 00:03:17.793 END TEST dm_mount 00:03:17.793 ************************************ 00:03:17.793 19:00:23 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:17.793 19:00:23 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:17.793 19:00:23 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.793 19:00:23 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:17.793 19:00:23 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:17.793 19:00:23 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:17.793 19:00:23 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:18.052 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:18.052 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:18.052 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:18.052 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:18.052 19:00:24 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:03:18.052 19:00:24 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:18.052 19:00:24 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:18.052 19:00:24 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:18.052 19:00:24 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:18.052 19:00:24 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:18.052 19:00:24 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:18.052
00:03:18.052 real 0m12.689s
00:03:18.052 user 0m2.714s
00:03:18.052 sys 0m4.443s
00:03:18.052 19:00:24 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:18.052 19:00:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:18.052 ************************************
00:03:18.052 END TEST devices
00:03:18.052 ************************************
00:03:18.052
00:03:18.052 real 0m38.519s
00:03:18.052 user 0m11.213s
00:03:18.052 sys 0m16.919s
00:03:18.052 19:00:24 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:18.052 19:00:24 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:18.052 ************************************
00:03:18.052 END TEST setup.sh
00:03:18.052 ************************************
00:03:18.311 19:00:24 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:19.250 Hugepages
00:03:19.250 node hugesize free / total
00:03:19.250 node0 1048576kB 0 / 0
00:03:19.250 node0 2048kB 2048 / 2048
00:03:19.250 node1 1048576kB 0 / 0
00:03:19.250 node1 2048kB 0 / 0
00:03:19.250
00:03:19.250 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:19.250 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - -
00:03:19.250 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - -
00:03:19.250 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - -
00:03:19.250 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - -
00:03:19.250 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - -
00:03:19.250 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - -
00:03:19.250 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - -
00:03:19.250 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - -
00:03:19.250 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - -
00:03:19.250 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - -
00:03:19.250 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - -
00:03:19.250 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - -
00:03:19.250 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - -
00:03:19.250 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - -
00:03:19.250 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - -
00:03:19.250 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - -
00:03:19.250 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:19.250 19:00:25 -- spdk/autotest.sh@130 -- # uname -s
00:03:19.250 19:00:25 --
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:19.250 19:00:25 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:19.250 19:00:25 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.626 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:20.626 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:20.626 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:20.626 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:20.626 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:20.626 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:20.626 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:20.626 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:20.626 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:20.626 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:20.626 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:20.626 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:20.626 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:20.626 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:20.626 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:20.626 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:21.562 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:21.562 19:00:27 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:22.500 19:00:28 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:22.500 19:00:28 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:22.500 19:00:28 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:22.500 19:00:28 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:22.500 19:00:28 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:22.500 19:00:28 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:22.500 19:00:28 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:22.500 19:00:28 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:22.500 19:00:28 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:22.500 19:00:28 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:22.500 19:00:28 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:84:00.0 00:03:22.500 19:00:28 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.441 Waiting for block devices as requested 00:03:23.441 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:03:23.699 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:03:23.699 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:03:23.699 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:03:23.956 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:03:23.956 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:03:23.956 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:03:23.956 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:03:24.216 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:03:24.216 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:03:24.216 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:03:24.475 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:03:24.475 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:03:24.475 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:03:24.475 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:03:24.733 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:03:24.733 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:03:24.733 19:00:30 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 
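The bdf loop that starts here walks the addresses collected by get_nvme_bdfs, whose xtrace appears just above (common/autotest_common.sh@1513-1519): scripts/gen_nvme.sh emits an SPDK JSON bdev config and jq pulls each controller's PCI address out of it. A minimal standalone sketch of that helper, assuming $rootdir points at an SPDK checkout and jq is installed:

    # Enumerate NVMe PCI addresses (BDFs): gen_nvme.sh prints a JSON config,
    # and .config[].params.traddr is the transport address, i.e. the BDF.
    get_nvme_bdfs() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]} == 0)) && return 1   # no NVMe controllers found
        printf '%s\n' "${bdfs[@]}"         # here: the single 0000:84:00.0
    }

The (( 1 == 0 )) and printf '%s\n' 0000:84:00.0 entries above are these two lines executing against the one 0a54 controller on this node; the loop body resumes below with get_nvme_ctrlr_from_bdf mapping that BDF back to its /dev/nvme0 character device.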
00:03:24.733 19:00:30 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0 00:03:24.733 19:00:30 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:24.733 19:00:30 -- common/autotest_common.sh@1502 -- # grep 0000:84:00.0/nvme/nvme 00:03:24.733 19:00:30 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:03:24.733 19:00:30 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]] 00:03:24.733 19:00:30 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:03:24.733 19:00:30 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:24.733 19:00:30 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:24.733 19:00:30 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:24.733 19:00:30 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:24.733 19:00:30 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:24.733 19:00:30 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:24.733 19:00:30 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:24.733 19:00:30 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:24.733 19:00:30 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:24.733 19:00:30 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:24.733 19:00:30 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:24.733 19:00:30 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:24.733 19:00:30 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:24.733 19:00:30 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:24.733 19:00:30 -- common/autotest_common.sh@1557 -- # continue 00:03:24.733 19:00:30 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:24.733 19:00:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:24.733 19:00:30 -- common/autotest_common.sh@10 -- # set +x 00:03:24.733 19:00:30 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:24.733 19:00:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:24.733 19:00:30 -- common/autotest_common.sh@10 -- # set +x 00:03:24.733 19:00:30 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.107 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:26.107 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:26.107 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:26.107 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:26.107 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:26.107 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:26.107 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:26.107 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:26.107 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:26.107 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:26.107 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:26.107 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:26.107 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:26.107 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:26.107 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:26.107 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:27.041 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:27.041 19:00:32 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:27.041 19:00:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:27.041 19:00:32 -- 
common/autotest_common.sh@10 -- # set +x 00:03:27.041 19:00:32 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:27.041 19:00:32 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:27.041 19:00:32 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:27.041 19:00:32 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:27.041 19:00:32 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:27.041 19:00:32 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:27.041 19:00:32 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:27.041 19:00:32 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:27.041 19:00:32 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:27.041 19:00:32 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:27.041 19:00:32 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:27.041 19:00:32 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:27.041 19:00:32 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:84:00.0 00:03:27.041 19:00:32 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:27.041 19:00:32 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:84:00.0/device 00:03:27.041 19:00:32 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:27.041 19:00:32 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:27.041 19:00:32 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:27.041 19:00:32 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:84:00.0 00:03:27.041 19:00:32 -- common/autotest_common.sh@1592 -- # [[ -z 0000:84:00.0 ]] 00:03:27.041 19:00:32 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2463478 00:03:27.041 19:00:32 -- common/autotest_common.sh@1598 -- # waitforlisten 2463478 00:03:27.041 19:00:32 -- common/autotest_common.sh@831 -- # '[' -z 2463478 ']' 00:03:27.041 19:00:32 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:27.041 19:00:32 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:27.041 19:00:32 -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:27.041 19:00:32 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:27.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:27.041 19:00:32 -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:27.041 19:00:32 -- common/autotest_common.sh@10 -- # set +x 00:03:27.041 [2024-07-24 19:00:32.987691] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
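opal_revert_cleanup above narrows the BDF list to controllers whose PCI device ID matches 0x0a54 by reading sysfs, then launches spdk_tgt against the survivors. A hedged sketch of that filter, mirroring the get_nvme_bdfs_by_id xtrace (the sysfs read is the cat /sys/bus/pci/devices/0000:84:00.0/device seen above):

    # Keep only NVMe BDFs whose PCI device ID matches $1 (e.g. 0x0a54).
    get_nvme_bdfs_by_id() {
        local id=$1 bdf device out=()
        for bdf in $(get_nvme_bdfs); do
            device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54
            [[ $device == "$id" ]] && out+=("$bdf")
        done
        ((${#out[@]})) && printf '%s\n' "${out[@]}"
    }

With the matching controller kept and spdk_tgt now starting, the revert itself is driven over JSON-RPC; the bdev_nvme_opal_revert exchange and its error response follow below.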
00:03:27.041 [2024-07-24 19:00:32.987789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463478 ] 00:03:27.041 EAL: No free 2048 kB hugepages reported on node 1 00:03:27.041 [2024-07-24 19:00:33.048948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:27.311 [2024-07-24 19:00:33.169183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:27.567 19:00:33 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:27.567 19:00:33 -- common/autotest_common.sh@864 -- # return 0 00:03:27.567 19:00:33 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:27.567 19:00:33 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:27.567 19:00:33 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0 00:03:30.857 nvme0n1 00:03:30.857 19:00:36 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:30.857 [2024-07-24 19:00:36.787998] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:30.857 [2024-07-24 19:00:36.788044] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:30.857 request: 00:03:30.857 { 00:03:30.857 "nvme_ctrlr_name": "nvme0", 00:03:30.857 "password": "test", 00:03:30.857 "method": "bdev_nvme_opal_revert", 00:03:30.857 "req_id": 1 00:03:30.857 } 00:03:30.857 Got JSON-RPC error response 00:03:30.857 response: 00:03:30.857 { 00:03:30.857 "code": -32603, 00:03:30.857 "message": "Internal error" 00:03:30.857 } 00:03:30.857 19:00:36 -- common/autotest_common.sh@1604 -- # true 00:03:30.857 19:00:36 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:30.857 19:00:36 -- common/autotest_common.sh@1608 -- # killprocess 2463478 00:03:30.857 19:00:36 -- common/autotest_common.sh@950 -- # '[' -z 2463478 ']' 00:03:30.857 19:00:36 -- common/autotest_common.sh@954 -- # kill -0 2463478 00:03:30.857 19:00:36 -- common/autotest_common.sh@955 -- # uname 00:03:30.857 19:00:36 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:30.857 19:00:36 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2463478 00:03:30.857 19:00:36 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:30.857 19:00:36 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:30.857 19:00:36 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2463478' 00:03:30.857 killing process with pid 2463478 00:03:30.857 19:00:36 -- common/autotest_common.sh@969 -- # kill 2463478 00:03:30.857 19:00:36 -- common/autotest_common.sh@974 -- # wait 2463478 00:03:32.752 19:00:38 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:32.752 19:00:38 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:32.752 19:00:38 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:32.752 19:00:38 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:32.752 19:00:38 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:32.752 19:00:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:32.752 19:00:38 -- common/autotest_common.sh@10 -- # set +x 00:03:32.752 19:00:38 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:32.752 19:00:38 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:32.752 19:00:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:32.752 19:00:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:32.752 19:00:38 -- common/autotest_common.sh@10 -- # set +x 00:03:32.752 ************************************ 00:03:32.752 START TEST env 00:03:32.752 ************************************ 00:03:32.752 19:00:38 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:32.752 * Looking for test storage... 00:03:32.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:32.752 19:00:38 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:32.752 19:00:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:32.752 19:00:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:32.752 19:00:38 env -- common/autotest_common.sh@10 -- # set +x 00:03:32.752 ************************************ 00:03:32.752 START TEST env_memory 00:03:32.752 ************************************ 00:03:32.752 19:00:38 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:32.752 00:03:32.752 00:03:32.752 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.752 http://cunit.sourceforge.net/ 00:03:32.752 00:03:32.752 00:03:32.753 Suite: memory 00:03:32.753 Test: alloc and free memory map ...[2024-07-24 19:00:38.653679] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:32.753 passed 00:03:32.753 Test: mem map translation ...[2024-07-24 19:00:38.684844] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:32.753 [2024-07-24 19:00:38.684871] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:32.753 [2024-07-24 19:00:38.684924] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:32.753 [2024-07-24 19:00:38.684939] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:32.753 passed 00:03:32.753 Test: mem map registration ...[2024-07-24 19:00:38.747494] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:32.753 [2024-07-24 19:00:38.747520] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:33.011 passed 00:03:33.011 Test: mem map adjacent registrations ...passed 00:03:33.011 00:03:33.011 Run Summary: Type Total Ran Passed Failed Inactive 00:03:33.011 suites 1 1 n/a 0 0 00:03:33.011 tests 4 4 4 0 0 00:03:33.011 asserts 152 152 152 0 n/a 00:03:33.011 00:03:33.011 Elapsed time = 0.214 seconds 00:03:33.011 00:03:33.011 real 0m0.223s 00:03:33.011 user 0m0.213s 00:03:33.011 sys 0m0.009s 00:03:33.011 19:00:38 
env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:33.011 19:00:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:33.011 ************************************ 00:03:33.011 END TEST env_memory 00:03:33.011 ************************************ 00:03:33.011 19:00:38 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:33.011 19:00:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:33.011 19:00:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:33.011 19:00:38 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.011 ************************************ 00:03:33.011 START TEST env_vtophys 00:03:33.011 ************************************ 00:03:33.011 19:00:38 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:33.011 EAL: lib.eal log level changed from notice to debug 00:03:33.011 EAL: Detected lcore 0 as core 0 on socket 0 00:03:33.011 EAL: Detected lcore 1 as core 1 on socket 0 00:03:33.011 EAL: Detected lcore 2 as core 2 on socket 0 00:03:33.011 EAL: Detected lcore 3 as core 3 on socket 0 00:03:33.011 EAL: Detected lcore 4 as core 4 on socket 0 00:03:33.011 EAL: Detected lcore 5 as core 5 on socket 0 00:03:33.011 EAL: Detected lcore 6 as core 6 on socket 0 00:03:33.011 EAL: Detected lcore 7 as core 7 on socket 0 00:03:33.011 EAL: Detected lcore 8 as core 0 on socket 1 00:03:33.011 EAL: Detected lcore 9 as core 1 on socket 1 00:03:33.011 EAL: Detected lcore 10 as core 2 on socket 1 00:03:33.011 EAL: Detected lcore 11 as core 3 on socket 1 00:03:33.011 EAL: Detected lcore 12 as core 4 on socket 1 00:03:33.011 EAL: Detected lcore 13 as core 5 on socket 1 00:03:33.011 EAL: Detected lcore 14 as core 6 on socket 1 00:03:33.011 EAL: Detected lcore 15 as core 7 on socket 1 00:03:33.011 EAL: Detected lcore 16 as core 0 on socket 0 00:03:33.011 EAL: Detected lcore 17 as core 1 on socket 0 00:03:33.011 EAL: Detected lcore 18 as core 2 on socket 0 00:03:33.011 EAL: Detected lcore 19 as core 3 on socket 0 00:03:33.011 EAL: Detected lcore 20 as core 4 on socket 0 00:03:33.011 EAL: Detected lcore 21 as core 5 on socket 0 00:03:33.011 EAL: Detected lcore 22 as core 6 on socket 0 00:03:33.011 EAL: Detected lcore 23 as core 7 on socket 0 00:03:33.011 EAL: Detected lcore 24 as core 0 on socket 1 00:03:33.011 EAL: Detected lcore 25 as core 1 on socket 1 00:03:33.011 EAL: Detected lcore 26 as core 2 on socket 1 00:03:33.011 EAL: Detected lcore 27 as core 3 on socket 1 00:03:33.011 EAL: Detected lcore 28 as core 4 on socket 1 00:03:33.011 EAL: Detected lcore 29 as core 5 on socket 1 00:03:33.011 EAL: Detected lcore 30 as core 6 on socket 1 00:03:33.011 EAL: Detected lcore 31 as core 7 on socket 1 00:03:33.011 EAL: Maximum logical cores by configuration: 128 00:03:33.011 EAL: Detected CPU lcores: 32 00:03:33.011 EAL: Detected NUMA nodes: 2 00:03:33.011 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:33.011 EAL: Detected shared linkage of DPDK 00:03:33.011 EAL: No shared files mode enabled, IPC will be disabled 00:03:33.011 EAL: Bus pci wants IOVA as 'DC' 00:03:33.011 EAL: Buses did not request a specific IOVA mode. 00:03:33.011 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:33.011 EAL: Selected IOVA mode 'VA' 00:03:33.011 EAL: No free 2048 kB hugepages reported on node 1 00:03:33.011 EAL: Probing VFIO support... 
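The EAL probe whose results follow (IOMMU type, VFIO support, memseg list reservations) reflects host state that can be cross-checked from the shell. A quick sketch using standard procfs/sysfs paths, independent of SPDK:

    # Non-empty when the IOMMU is enabled and grouping is active (VFIO needs this):
    ls /sys/kernel/iommu_groups
    # The 2 MB hugepage pool that backs the memseg lists reserved below:
    grep -i huge /proc/meminfo
    # VFIO device nodes appear here once a device is bound to vfio-pci:
    ls /dev/vfio

On this node the probe succeeds (type 1 IOMMU, VFIO initialized), after which the allocator reserves a 0x61000-byte header plus a 0x400000000-byte VA window for each of the eight memseg lists, four per NUMA socket.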
00:03:33.011 EAL: IOMMU type 1 (Type 1) is supported 00:03:33.011 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:33.011 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:33.011 EAL: VFIO support initialized 00:03:33.011 EAL: Ask a virtual area of 0x2e000 bytes 00:03:33.011 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:33.011 EAL: Setting up physically contiguous memory... 00:03:33.011 EAL: Setting maximum number of open files to 524288 00:03:33.011 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:33.011 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:33.011 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:33.011 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.011 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:33.011 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.011 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.011 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:33.011 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:33.011 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.011 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:33.011 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.011 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.011 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:33.011 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:33.011 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.011 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:33.011 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.011 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.011 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:33.011 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:33.012 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.012 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:33.012 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.012 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.012 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:33.012 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:33.012 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:33.012 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.012 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:33.012 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.012 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.012 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:33.012 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:33.012 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.012 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:33.012 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.012 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.012 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:33.012 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:33.012 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.012 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:33.012 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.012 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.012 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:03:33.012 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:33.012 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.012 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:33.012 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.012 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.012 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:33.012 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:33.012 EAL: Hugepages will be freed exactly as allocated. 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: TSC frequency is ~2700000 KHz 00:03:33.012 EAL: Main lcore 0 is ready (tid=7f3b802faa00;cpuset=[0]) 00:03:33.012 EAL: Trying to obtain current memory policy. 00:03:33.012 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.012 EAL: Restoring previous memory policy: 0 00:03:33.012 EAL: request: mp_malloc_sync 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: Heap on socket 0 was expanded by 2MB 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:33.012 EAL: Mem event callback 'spdk:(nil)' registered 00:03:33.012 00:03:33.012 00:03:33.012 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.012 http://cunit.sourceforge.net/ 00:03:33.012 00:03:33.012 00:03:33.012 Suite: components_suite 00:03:33.012 Test: vtophys_malloc_test ...passed 00:03:33.012 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:33.012 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.012 EAL: Restoring previous memory policy: 4 00:03:33.012 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.012 EAL: request: mp_malloc_sync 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: Heap on socket 0 was expanded by 4MB 00:03:33.012 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.012 EAL: request: mp_malloc_sync 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: Heap on socket 0 was shrunk by 4MB 00:03:33.012 EAL: Trying to obtain current memory policy. 00:03:33.012 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.012 EAL: Restoring previous memory policy: 4 00:03:33.012 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.012 EAL: request: mp_malloc_sync 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: Heap on socket 0 was expanded by 6MB 00:03:33.012 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.012 EAL: request: mp_malloc_sync 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: Heap on socket 0 was shrunk by 6MB 00:03:33.012 EAL: Trying to obtain current memory policy. 00:03:33.012 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.012 EAL: Restoring previous memory policy: 4 00:03:33.012 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.012 EAL: request: mp_malloc_sync 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: Heap on socket 0 was expanded by 10MB 00:03:33.012 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.012 EAL: request: mp_malloc_sync 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: Heap on socket 0 was shrunk by 10MB 00:03:33.012 EAL: Trying to obtain current memory policy. 
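A note on the numbers in this suite: the expand/shrink steps of 4, 6 and 10 MB above, continuing with 18, 34, 66, 130, 258, 514 and 1026 MB below, follow 2^k + 2 MB for k = 1..10 (4 = 2+2, 6 = 4+2, 10 = 8+2, ..., 1026 = 1024+2). Each iteration doubles the malloc payload while a constant 2 MB rides along, consistent with each payload being rounded up by one extra 2 MB hugepage of allocator overhead; that reading is an inference from the arithmetic, not something the log states.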
00:03:33.012 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.012 EAL: Restoring previous memory policy: 4 00:03:33.012 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.012 EAL: request: mp_malloc_sync 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: Heap on socket 0 was expanded by 18MB 00:03:33.012 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.012 EAL: request: mp_malloc_sync 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: Heap on socket 0 was shrunk by 18MB 00:03:33.012 EAL: Trying to obtain current memory policy. 00:03:33.012 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.012 EAL: Restoring previous memory policy: 4 00:03:33.012 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.012 EAL: request: mp_malloc_sync 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: Heap on socket 0 was expanded by 34MB 00:03:33.012 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.012 EAL: request: mp_malloc_sync 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: Heap on socket 0 was shrunk by 34MB 00:03:33.012 EAL: Trying to obtain current memory policy. 00:03:33.012 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.012 EAL: Restoring previous memory policy: 4 00:03:33.012 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.012 EAL: request: mp_malloc_sync 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: Heap on socket 0 was expanded by 66MB 00:03:33.012 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.012 EAL: request: mp_malloc_sync 00:03:33.012 EAL: No shared files mode enabled, IPC is disabled 00:03:33.012 EAL: Heap on socket 0 was shrunk by 66MB 00:03:33.012 EAL: Trying to obtain current memory policy. 00:03:33.012 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.270 EAL: Restoring previous memory policy: 4 00:03:33.270 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.270 EAL: request: mp_malloc_sync 00:03:33.270 EAL: No shared files mode enabled, IPC is disabled 00:03:33.270 EAL: Heap on socket 0 was expanded by 130MB 00:03:33.270 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.270 EAL: request: mp_malloc_sync 00:03:33.270 EAL: No shared files mode enabled, IPC is disabled 00:03:33.270 EAL: Heap on socket 0 was shrunk by 130MB 00:03:33.270 EAL: Trying to obtain current memory policy. 00:03:33.270 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.270 EAL: Restoring previous memory policy: 4 00:03:33.270 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.270 EAL: request: mp_malloc_sync 00:03:33.270 EAL: No shared files mode enabled, IPC is disabled 00:03:33.270 EAL: Heap on socket 0 was expanded by 258MB 00:03:33.270 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.270 EAL: request: mp_malloc_sync 00:03:33.270 EAL: No shared files mode enabled, IPC is disabled 00:03:33.270 EAL: Heap on socket 0 was shrunk by 258MB 00:03:33.270 EAL: Trying to obtain current memory policy. 
00:03:33.270 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.528 EAL: Restoring previous memory policy: 4 00:03:33.528 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.528 EAL: request: mp_malloc_sync 00:03:33.528 EAL: No shared files mode enabled, IPC is disabled 00:03:33.528 EAL: Heap on socket 0 was expanded by 514MB 00:03:33.528 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.528 EAL: request: mp_malloc_sync 00:03:33.528 EAL: No shared files mode enabled, IPC is disabled 00:03:33.528 EAL: Heap on socket 0 was shrunk by 514MB 00:03:33.528 EAL: Trying to obtain current memory policy. 00:03:33.528 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.786 EAL: Restoring previous memory policy: 4 00:03:33.786 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.786 EAL: request: mp_malloc_sync 00:03:33.786 EAL: No shared files mode enabled, IPC is disabled 00:03:33.786 EAL: Heap on socket 0 was expanded by 1026MB 00:03:34.044 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.044 EAL: request: mp_malloc_sync 00:03:34.044 EAL: No shared files mode enabled, IPC is disabled 00:03:34.044 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:34.044 passed 00:03:34.044 00:03:34.044 Run Summary: Type Total Ran Passed Failed Inactive 00:03:34.044 suites 1 1 n/a 0 0 00:03:34.044 tests 2 2 2 0 0 00:03:34.044 asserts 497 497 497 0 n/a 00:03:34.044 00:03:34.044 Elapsed time = 0.945 seconds 00:03:34.044 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.044 EAL: request: mp_malloc_sync 00:03:34.044 EAL: No shared files mode enabled, IPC is disabled 00:03:34.044 EAL: Heap on socket 0 was shrunk by 2MB 00:03:34.044 EAL: No shared files mode enabled, IPC is disabled 00:03:34.044 EAL: No shared files mode enabled, IPC is disabled 00:03:34.044 EAL: No shared files mode enabled, IPC is disabled 00:03:34.044 00:03:34.044 real 0m1.060s 00:03:34.044 user 0m0.510s 00:03:34.044 sys 0m0.515s 00:03:34.044 19:00:39 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:34.044 19:00:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:34.044 ************************************ 00:03:34.044 END TEST env_vtophys 00:03:34.044 ************************************ 00:03:34.044 19:00:39 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:34.044 19:00:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:34.044 19:00:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:34.044 19:00:39 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.044 ************************************ 00:03:34.044 START TEST env_pci 00:03:34.044 ************************************ 00:03:34.045 19:00:40 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:34.045 00:03:34.045 00:03:34.045 CUnit - A unit testing framework for C - Version 2.1-3 00:03:34.045 http://cunit.sourceforge.net/ 00:03:34.045 00:03:34.045 00:03:34.045 Suite: pci 00:03:34.045 Test: pci_hook ...[2024-07-24 19:00:40.014714] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2464216 has claimed it 00:03:34.045 EAL: Cannot find device (10000:00:01.0) 00:03:34.045 EAL: Failed to attach device on primary process 00:03:34.045 passed 00:03:34.045 00:03:34.045 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:34.045 suites 1 1 n/a 0 0 00:03:34.045 tests 1 1 1 0 0 00:03:34.045 asserts 25 25 25 0 n/a 00:03:34.045 00:03:34.045 Elapsed time = 0.017 seconds 00:03:34.045 00:03:34.045 real 0m0.030s 00:03:34.045 user 0m0.012s 00:03:34.045 sys 0m0.018s 00:03:34.045 19:00:40 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:34.045 19:00:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:34.045 ************************************ 00:03:34.045 END TEST env_pci 00:03:34.045 ************************************ 00:03:34.304 19:00:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:34.304 19:00:40 env -- env/env.sh@15 -- # uname 00:03:34.305 19:00:40 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:34.305 19:00:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:34.305 19:00:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:34.305 19:00:40 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:34.305 19:00:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:34.305 19:00:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.305 ************************************ 00:03:34.305 START TEST env_dpdk_post_init 00:03:34.305 ************************************ 00:03:34.305 19:00:40 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:34.305 EAL: Detected CPU lcores: 32 00:03:34.305 EAL: Detected NUMA nodes: 2 00:03:34.305 EAL: Detected shared linkage of DPDK 00:03:34.305 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:34.305 EAL: Selected IOVA mode 'VA' 00:03:34.305 EAL: No free 2048 kB hugepages reported on node 1 00:03:34.305 EAL: VFIO support initialized 00:03:34.305 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:34.305 EAL: Using IOMMU type 1 (Type 1) 00:03:34.305 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:00:04.0 (socket 0) 00:03:34.305 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:00:04.1 (socket 0) 00:03:34.305 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:00:04.2 (socket 0) 00:03:34.305 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:00:04.3 (socket 0) 00:03:34.305 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:00:04.4 (socket 0) 00:03:34.305 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:00:04.5 (socket 0) 00:03:34.305 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:00:04.6 (socket 0) 00:03:34.305 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:00:04.7 (socket 0) 00:03:34.305 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:80:04.0 (socket 1) 00:03:34.305 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:80:04.1 (socket 1) 00:03:34.562 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:80:04.2 (socket 1) 00:03:34.562 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:80:04.3 (socket 1) 00:03:34.562 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:80:04.4 (socket 1) 00:03:34.562 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:80:04.5 (socket 1) 00:03:34.562 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:80:04.6 (socket 1) 00:03:34.562 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:80:04.7 (socket 1) 00:03:35.127 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:84:00.0 (socket 1) 00:03:38.403 EAL: Releasing PCI mapped resource for 0000:84:00.0 00:03:38.403 EAL: Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000 00:03:38.662 Starting DPDK initialization... 00:03:38.662 Starting SPDK post initialization... 00:03:38.662 SPDK NVMe probe 00:03:38.662 Attaching to 0000:84:00.0 00:03:38.662 Attached to 0000:84:00.0 00:03:38.662 Cleaning up... 00:03:38.662 00:03:38.662 real 0m4.373s 00:03:38.662 user 0m3.253s 00:03:38.662 sys 0m0.189s 00:03:38.662 19:00:44 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.662 19:00:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:38.662 ************************************ 00:03:38.662 END TEST env_dpdk_post_init 00:03:38.662 ************************************ 00:03:38.662 19:00:44 env -- env/env.sh@26 -- # uname 00:03:38.662 19:00:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:38.662 19:00:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:38.662 19:00:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.662 19:00:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.662 19:00:44 env -- common/autotest_common.sh@10 -- # set +x 00:03:38.662 ************************************ 00:03:38.662 START TEST env_mem_callbacks 00:03:38.662 ************************************ 00:03:38.662 19:00:44 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:38.662 EAL: Detected CPU lcores: 32 00:03:38.662 EAL: Detected NUMA nodes: 2 00:03:38.662 EAL: Detected shared linkage of DPDK 00:03:38.662 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:38.662 EAL: Selected IOVA mode 'VA' 00:03:38.662 EAL: No free 2048 kB hugepages reported on node 1 00:03:38.662 EAL: VFIO support initialized 00:03:38.662 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:38.662 00:03:38.662 00:03:38.662 CUnit - A unit testing framework for C - Version 2.1-3 00:03:38.662 http://cunit.sourceforge.net/ 00:03:38.662 00:03:38.662 00:03:38.662 Suite: memory 00:03:38.662 Test: test ... 
00:03:38.662 register 0x200000200000 2097152 00:03:38.662 malloc 3145728 00:03:38.662 register 0x200000400000 4194304 00:03:38.662 buf 0x200000500000 len 3145728 PASSED 00:03:38.662 malloc 64 00:03:38.662 buf 0x2000004fff40 len 64 PASSED 00:03:38.662 malloc 4194304 00:03:38.662 register 0x200000800000 6291456 00:03:38.662 buf 0x200000a00000 len 4194304 PASSED 00:03:38.662 free 0x200000500000 3145728 00:03:38.662 free 0x2000004fff40 64 00:03:38.662 unregister 0x200000400000 4194304 PASSED 00:03:38.662 free 0x200000a00000 4194304 00:03:38.662 unregister 0x200000800000 6291456 PASSED 00:03:38.662 malloc 8388608 00:03:38.662 register 0x200000400000 10485760 00:03:38.662 buf 0x200000600000 len 8388608 PASSED 00:03:38.662 free 0x200000600000 8388608 00:03:38.662 unregister 0x200000400000 10485760 PASSED 00:03:38.662 passed 00:03:38.662 00:03:38.662 Run Summary: Type Total Ran Passed Failed Inactive 00:03:38.662 suites 1 1 n/a 0 0 00:03:38.662 tests 1 1 1 0 0 00:03:38.662 asserts 15 15 15 0 n/a 00:03:38.662 00:03:38.662 Elapsed time = 0.005 seconds 00:03:38.662 00:03:38.662 real 0m0.047s 00:03:38.662 user 0m0.015s 00:03:38.662 sys 0m0.031s 00:03:38.662 19:00:44 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.662 19:00:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:38.662 ************************************ 00:03:38.663 END TEST env_mem_callbacks 00:03:38.663 ************************************ 00:03:38.663 00:03:38.663 real 0m6.064s 00:03:38.663 user 0m4.123s 00:03:38.663 sys 0m0.989s 00:03:38.663 19:00:44 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.663 19:00:44 env -- common/autotest_common.sh@10 -- # set +x 00:03:38.663 ************************************ 00:03:38.663 END TEST env 00:03:38.663 ************************************ 00:03:38.663 19:00:44 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:38.663 19:00:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.663 19:00:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.663 19:00:44 -- common/autotest_common.sh@10 -- # set +x 00:03:38.663 ************************************ 00:03:38.663 START TEST rpc 00:03:38.663 ************************************ 00:03:38.663 19:00:44 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:38.920 * Looking for test storage... 00:03:38.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:38.921 19:00:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2464749 00:03:38.921 19:00:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:38.921 19:00:44 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:38.921 19:00:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2464749 00:03:38.921 19:00:44 rpc -- common/autotest_common.sh@831 -- # '[' -z 2464749 ']' 00:03:38.921 19:00:44 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:38.921 19:00:44 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:38.921 19:00:44 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:38.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
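rpc.sh has just launched spdk_tgt with -e bdev and waitforlisten is polling for the default UNIX socket. Once /var/tmp/spdk.sock is up, every rpc_cmd in the tests that follow is a thin wrapper over scripts/rpc.py; a hedged sketch of the equivalent manual calls, previewing what rpc_integrity does next:

    # Against a freshly started target the bdev list is empty:
    ./scripts/rpc.py bdev_get_bdevs | jq length        # -> 0
    # Create the 8 MB, 512-byte-block malloc bdev the test names Malloc0:
    ./scripts/rpc.py bdev_malloc_create 8 512          # -> prints "Malloc0"
    ./scripts/rpc.py bdev_get_bdevs | jq length        # -> 1

rpc.py targets /var/tmp/spdk.sock by default; a different socket can be selected with -s.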
00:03:38.921 19:00:44 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:38.921 19:00:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.921 [2024-07-24 19:00:44.763426] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:03:38.921 [2024-07-24 19:00:44.763532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464749 ] 00:03:38.921 EAL: No free 2048 kB hugepages reported on node 1 00:03:38.921 [2024-07-24 19:00:44.839406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.179 [2024-07-24 19:00:44.992884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:39.179 [2024-07-24 19:00:44.992963] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2464749' to capture a snapshot of events at runtime. 00:03:39.179 [2024-07-24 19:00:44.992994] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:39.179 [2024-07-24 19:00:44.993020] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:39.179 [2024-07-24 19:00:44.993043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2464749 for offline analysis/debug. 00:03:39.179 [2024-07-24 19:00:44.993098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.110 19:00:45 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:40.110 19:00:45 rpc -- common/autotest_common.sh@864 -- # return 0 00:03:40.110 19:00:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:40.110 19:00:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:40.110 19:00:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:40.110 19:00:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:40.110 19:00:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.110 19:00:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.110 19:00:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.110 ************************************ 00:03:40.110 START TEST rpc_integrity 00:03:40.110 ************************************ 00:03:40.110 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:40.110 19:00:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:40.110 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.110 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.110 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.110 19:00:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:40.110 19:00:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:40.110 19:00:45 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:40.110 19:00:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:40.110 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.110 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.110 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.110 19:00:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:40.110 19:00:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:40.110 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.110 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.110 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.110 19:00:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:40.110 { 00:03:40.110 "name": "Malloc0", 00:03:40.110 "aliases": [ 00:03:40.110 "5110401e-4574-402b-b1fb-1a096c122d50" 00:03:40.110 ], 00:03:40.110 "product_name": "Malloc disk", 00:03:40.110 "block_size": 512, 00:03:40.110 "num_blocks": 16384, 00:03:40.110 "uuid": "5110401e-4574-402b-b1fb-1a096c122d50", 00:03:40.110 "assigned_rate_limits": { 00:03:40.110 "rw_ios_per_sec": 0, 00:03:40.110 "rw_mbytes_per_sec": 0, 00:03:40.110 "r_mbytes_per_sec": 0, 00:03:40.110 "w_mbytes_per_sec": 0 00:03:40.110 }, 00:03:40.110 "claimed": false, 00:03:40.110 "zoned": false, 00:03:40.110 "supported_io_types": { 00:03:40.110 "read": true, 00:03:40.110 "write": true, 00:03:40.110 "unmap": true, 00:03:40.110 "flush": true, 00:03:40.110 "reset": true, 00:03:40.110 "nvme_admin": false, 00:03:40.110 "nvme_io": false, 00:03:40.110 "nvme_io_md": false, 00:03:40.110 "write_zeroes": true, 00:03:40.110 "zcopy": true, 00:03:40.110 "get_zone_info": false, 00:03:40.111 "zone_management": false, 00:03:40.111 "zone_append": false, 00:03:40.111 "compare": false, 00:03:40.111 "compare_and_write": false, 00:03:40.111 "abort": true, 00:03:40.111 "seek_hole": false, 00:03:40.111 "seek_data": false, 00:03:40.111 "copy": true, 00:03:40.111 "nvme_iov_md": false 00:03:40.111 }, 00:03:40.111 "memory_domains": [ 00:03:40.111 { 00:03:40.111 "dma_device_id": "system", 00:03:40.111 "dma_device_type": 1 00:03:40.111 }, 00:03:40.111 { 00:03:40.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.111 "dma_device_type": 2 00:03:40.111 } 00:03:40.111 ], 00:03:40.111 "driver_specific": {} 00:03:40.111 } 00:03:40.111 ]' 00:03:40.111 19:00:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:40.111 19:00:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:40.111 19:00:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:40.111 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.111 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.111 [2024-07-24 19:00:45.946494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:40.111 [2024-07-24 19:00:45.946553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:40.111 [2024-07-24 19:00:45.946577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1670380 00:03:40.111 [2024-07-24 19:00:45.946592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:40.111 [2024-07-24 19:00:45.948150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
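The rpc_integrity sequence traced above can be replayed by hand against a running spdk_tgt; rpc_cmd in the test is effectively a wrapper around the repo's scripts/rpc.py talking to the default /var/tmp/spdk.sock, so a minimal sketch of the same flow (paths assumed; bdev names as reported in the log) is:

# create an 8 MiB malloc bdev with 512-byte blocks; prints the auto-assigned name Malloc0
scripts/rpc.py bdev_malloc_create 8 512
# stack a passthru bdev on top; Malloc0 becomes claimed with claim_type exclusive_write
scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
# bdev_get_bdevs should now list two entries (Malloc0 and Passthru0)
scripts/rpc.py bdev_get_bdevs | jq length
# tear down in reverse order and confirm the list is empty again
scripts/rpc.py bdev_passthru_delete Passthru0
scripts/rpc.py bdev_malloc_delete Malloc0
scripts/rpc.py bdev_get_bdevs | jq length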
00:03:40.111 [2024-07-24 19:00:45.948176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:40.111 Passthru0 00:03:40.111 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.111 19:00:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:40.111 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.111 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.111 19:00:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.111 19:00:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:40.111 { 00:03:40.111 "name": "Malloc0", 00:03:40.111 "aliases": [ 00:03:40.111 "5110401e-4574-402b-b1fb-1a096c122d50" 00:03:40.111 ], 00:03:40.111 "product_name": "Malloc disk", 00:03:40.111 "block_size": 512, 00:03:40.111 "num_blocks": 16384, 00:03:40.111 "uuid": "5110401e-4574-402b-b1fb-1a096c122d50", 00:03:40.111 "assigned_rate_limits": { 00:03:40.111 "rw_ios_per_sec": 0, 00:03:40.111 "rw_mbytes_per_sec": 0, 00:03:40.111 "r_mbytes_per_sec": 0, 00:03:40.111 "w_mbytes_per_sec": 0 00:03:40.111 }, 00:03:40.111 "claimed": true, 00:03:40.111 "claim_type": "exclusive_write", 00:03:40.111 "zoned": false, 00:03:40.111 "supported_io_types": { 00:03:40.111 "read": true, 00:03:40.111 "write": true, 00:03:40.111 "unmap": true, 00:03:40.111 "flush": true, 00:03:40.111 "reset": true, 00:03:40.111 "nvme_admin": false, 00:03:40.111 "nvme_io": false, 00:03:40.111 "nvme_io_md": false, 00:03:40.111 "write_zeroes": true, 00:03:40.111 "zcopy": true, 00:03:40.111 "get_zone_info": false, 00:03:40.111 "zone_management": false, 00:03:40.111 "zone_append": false, 00:03:40.111 "compare": false, 00:03:40.111 "compare_and_write": false, 00:03:40.111 "abort": true, 00:03:40.111 "seek_hole": false, 00:03:40.111 "seek_data": false, 00:03:40.111 "copy": true, 00:03:40.111 "nvme_iov_md": false 00:03:40.111 }, 00:03:40.111 "memory_domains": [ 00:03:40.111 { 00:03:40.111 "dma_device_id": "system", 00:03:40.111 "dma_device_type": 1 00:03:40.111 }, 00:03:40.111 { 00:03:40.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.111 "dma_device_type": 2 00:03:40.111 } 00:03:40.111 ], 00:03:40.111 "driver_specific": {} 00:03:40.111 }, 00:03:40.111 { 00:03:40.111 "name": "Passthru0", 00:03:40.111 "aliases": [ 00:03:40.111 "f3853973-99a1-53c7-a438-c3dd2965bb03" 00:03:40.111 ], 00:03:40.111 "product_name": "passthru", 00:03:40.111 "block_size": 512, 00:03:40.111 "num_blocks": 16384, 00:03:40.111 "uuid": "f3853973-99a1-53c7-a438-c3dd2965bb03", 00:03:40.111 "assigned_rate_limits": { 00:03:40.111 "rw_ios_per_sec": 0, 00:03:40.111 "rw_mbytes_per_sec": 0, 00:03:40.111 "r_mbytes_per_sec": 0, 00:03:40.111 "w_mbytes_per_sec": 0 00:03:40.111 }, 00:03:40.111 "claimed": false, 00:03:40.111 "zoned": false, 00:03:40.111 "supported_io_types": { 00:03:40.111 "read": true, 00:03:40.111 "write": true, 00:03:40.111 "unmap": true, 00:03:40.111 "flush": true, 00:03:40.111 "reset": true, 00:03:40.111 "nvme_admin": false, 00:03:40.111 "nvme_io": false, 00:03:40.111 "nvme_io_md": false, 00:03:40.111 "write_zeroes": true, 00:03:40.111 "zcopy": true, 00:03:40.111 "get_zone_info": false, 00:03:40.111 "zone_management": false, 00:03:40.111 "zone_append": false, 00:03:40.111 "compare": false, 00:03:40.111 "compare_and_write": false, 00:03:40.111 "abort": true, 00:03:40.111 "seek_hole": false, 00:03:40.111 "seek_data": false, 00:03:40.111 "copy": true, 00:03:40.111 "nvme_iov_md": false 00:03:40.111 
}, 00:03:40.111 "memory_domains": [ 00:03:40.111 { 00:03:40.111 "dma_device_id": "system", 00:03:40.111 "dma_device_type": 1 00:03:40.111 }, 00:03:40.111 { 00:03:40.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.111 "dma_device_type": 2 00:03:40.111 } 00:03:40.111 ], 00:03:40.111 "driver_specific": { 00:03:40.111 "passthru": { 00:03:40.111 "name": "Passthru0", 00:03:40.111 "base_bdev_name": "Malloc0" 00:03:40.111 } 00:03:40.111 } 00:03:40.111 } 00:03:40.111 ]' 00:03:40.111 19:00:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:40.111 19:00:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:40.111 19:00:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:40.111 19:00:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.111 19:00:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.111 19:00:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.111 19:00:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:40.111 19:00:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.111 19:00:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.111 19:00:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.111 19:00:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:40.111 19:00:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.111 19:00:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.111 19:00:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.111 19:00:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:40.111 19:00:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:40.111 19:00:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:40.111 00:03:40.111 real 0m0.255s 00:03:40.111 user 0m0.158s 00:03:40.111 sys 0m0.036s 00:03:40.111 19:00:46 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.111 19:00:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.111 ************************************ 00:03:40.111 END TEST rpc_integrity 00:03:40.111 ************************************ 00:03:40.111 19:00:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:40.111 19:00:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.111 19:00:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.111 19:00:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.111 ************************************ 00:03:40.111 START TEST rpc_plugins 00:03:40.111 ************************************ 00:03:40.111 19:00:46 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:03:40.111 19:00:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:40.111 19:00:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.111 19:00:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:40.375 19:00:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.375 19:00:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:40.375 19:00:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:40.375 19:00:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.375 19:00:46 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:03:40.375 19:00:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.375 19:00:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:40.375 { 00:03:40.375 "name": "Malloc1", 00:03:40.375 "aliases": [ 00:03:40.375 "97a45470-2256-498c-88c7-5b7012623173" 00:03:40.375 ], 00:03:40.375 "product_name": "Malloc disk", 00:03:40.375 "block_size": 4096, 00:03:40.375 "num_blocks": 256, 00:03:40.375 "uuid": "97a45470-2256-498c-88c7-5b7012623173", 00:03:40.375 "assigned_rate_limits": { 00:03:40.375 "rw_ios_per_sec": 0, 00:03:40.375 "rw_mbytes_per_sec": 0, 00:03:40.375 "r_mbytes_per_sec": 0, 00:03:40.375 "w_mbytes_per_sec": 0 00:03:40.375 }, 00:03:40.375 "claimed": false, 00:03:40.375 "zoned": false, 00:03:40.375 "supported_io_types": { 00:03:40.375 "read": true, 00:03:40.375 "write": true, 00:03:40.375 "unmap": true, 00:03:40.375 "flush": true, 00:03:40.375 "reset": true, 00:03:40.375 "nvme_admin": false, 00:03:40.375 "nvme_io": false, 00:03:40.375 "nvme_io_md": false, 00:03:40.375 "write_zeroes": true, 00:03:40.375 "zcopy": true, 00:03:40.375 "get_zone_info": false, 00:03:40.375 "zone_management": false, 00:03:40.375 "zone_append": false, 00:03:40.375 "compare": false, 00:03:40.375 "compare_and_write": false, 00:03:40.375 "abort": true, 00:03:40.375 "seek_hole": false, 00:03:40.375 "seek_data": false, 00:03:40.375 "copy": true, 00:03:40.375 "nvme_iov_md": false 00:03:40.375 }, 00:03:40.375 "memory_domains": [ 00:03:40.375 { 00:03:40.375 "dma_device_id": "system", 00:03:40.375 "dma_device_type": 1 00:03:40.375 }, 00:03:40.375 { 00:03:40.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.375 "dma_device_type": 2 00:03:40.375 } 00:03:40.375 ], 00:03:40.375 "driver_specific": {} 00:03:40.375 } 00:03:40.375 ]' 00:03:40.375 19:00:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:40.375 19:00:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:40.375 19:00:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:40.375 19:00:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.375 19:00:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:40.375 19:00:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.375 19:00:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:40.375 19:00:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.375 19:00:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:40.375 19:00:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.375 19:00:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:40.375 19:00:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:40.375 19:00:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:40.375 00:03:40.375 real 0m0.119s 00:03:40.375 user 0m0.080s 00:03:40.375 sys 0m0.010s 00:03:40.375 19:00:46 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.375 19:00:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:40.375 ************************************ 00:03:40.375 END TEST rpc_plugins 00:03:40.375 ************************************ 00:03:40.375 19:00:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:40.375 19:00:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.375 19:00:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.375 19:00:46 
rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.375 ************************************ 00:03:40.375 START TEST rpc_trace_cmd_test 00:03:40.375 ************************************ 00:03:40.375 19:00:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:03:40.375 19:00:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:40.375 19:00:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:40.375 19:00:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.375 19:00:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:40.375 19:00:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.375 19:00:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:40.375 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2464749", 00:03:40.375 "tpoint_group_mask": "0x8", 00:03:40.375 "iscsi_conn": { 00:03:40.375 "mask": "0x2", 00:03:40.375 "tpoint_mask": "0x0" 00:03:40.375 }, 00:03:40.375 "scsi": { 00:03:40.375 "mask": "0x4", 00:03:40.375 "tpoint_mask": "0x0" 00:03:40.375 }, 00:03:40.375 "bdev": { 00:03:40.375 "mask": "0x8", 00:03:40.375 "tpoint_mask": "0xffffffffffffffff" 00:03:40.375 }, 00:03:40.375 "nvmf_rdma": { 00:03:40.375 "mask": "0x10", 00:03:40.375 "tpoint_mask": "0x0" 00:03:40.375 }, 00:03:40.375 "nvmf_tcp": { 00:03:40.375 "mask": "0x20", 00:03:40.375 "tpoint_mask": "0x0" 00:03:40.375 }, 00:03:40.375 "ftl": { 00:03:40.375 "mask": "0x40", 00:03:40.375 "tpoint_mask": "0x0" 00:03:40.375 }, 00:03:40.375 "blobfs": { 00:03:40.375 "mask": "0x80", 00:03:40.375 "tpoint_mask": "0x0" 00:03:40.375 }, 00:03:40.375 "dsa": { 00:03:40.375 "mask": "0x200", 00:03:40.375 "tpoint_mask": "0x0" 00:03:40.375 }, 00:03:40.375 "thread": { 00:03:40.375 "mask": "0x400", 00:03:40.375 "tpoint_mask": "0x0" 00:03:40.375 }, 00:03:40.375 "nvme_pcie": { 00:03:40.375 "mask": "0x800", 00:03:40.375 "tpoint_mask": "0x0" 00:03:40.375 }, 00:03:40.375 "iaa": { 00:03:40.375 "mask": "0x1000", 00:03:40.375 "tpoint_mask": "0x0" 00:03:40.375 }, 00:03:40.375 "nvme_tcp": { 00:03:40.375 "mask": "0x2000", 00:03:40.375 "tpoint_mask": "0x0" 00:03:40.375 }, 00:03:40.375 "bdev_nvme": { 00:03:40.375 "mask": "0x4000", 00:03:40.375 "tpoint_mask": "0x0" 00:03:40.375 }, 00:03:40.375 "sock": { 00:03:40.375 "mask": "0x8000", 00:03:40.375 "tpoint_mask": "0x0" 00:03:40.375 } 00:03:40.375 }' 00:03:40.375 19:00:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:40.375 19:00:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:40.375 19:00:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:40.663 19:00:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:40.663 19:00:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:40.663 19:00:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:40.663 19:00:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:40.663 19:00:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:40.663 19:00:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:40.663 19:00:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:40.663 00:03:40.663 real 0m0.215s 00:03:40.663 user 0m0.186s 00:03:40.663 sys 0m0.020s 00:03:40.663 19:00:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.663 19:00:46 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:40.663 ************************************ 00:03:40.663 END TEST rpc_trace_cmd_test 00:03:40.663 ************************************ 00:03:40.663 19:00:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:40.663 19:00:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:40.663 19:00:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:40.663 19:00:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.663 19:00:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.663 19:00:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.663 ************************************ 00:03:40.663 START TEST rpc_daemon_integrity 00:03:40.663 ************************************ 00:03:40.663 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:40.663 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:40.663 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.663 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.663 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.664 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:40.664 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:40.664 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:40.664 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:40.664 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.664 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.664 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.664 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:40.664 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:40.664 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.664 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.664 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.664 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:40.664 { 00:03:40.664 "name": "Malloc2", 00:03:40.664 "aliases": [ 00:03:40.664 "8a5e72cd-9ce4-4ea5-8eb9-19407d08950f" 00:03:40.664 ], 00:03:40.664 "product_name": "Malloc disk", 00:03:40.664 "block_size": 512, 00:03:40.664 "num_blocks": 16384, 00:03:40.664 "uuid": "8a5e72cd-9ce4-4ea5-8eb9-19407d08950f", 00:03:40.664 "assigned_rate_limits": { 00:03:40.664 "rw_ios_per_sec": 0, 00:03:40.664 "rw_mbytes_per_sec": 0, 00:03:40.664 "r_mbytes_per_sec": 0, 00:03:40.664 "w_mbytes_per_sec": 0 00:03:40.664 }, 00:03:40.664 "claimed": false, 00:03:40.664 "zoned": false, 00:03:40.664 "supported_io_types": { 00:03:40.664 "read": true, 00:03:40.664 "write": true, 00:03:40.664 "unmap": true, 00:03:40.664 "flush": true, 00:03:40.664 "reset": true, 00:03:40.664 "nvme_admin": false, 00:03:40.664 "nvme_io": false, 00:03:40.664 "nvme_io_md": false, 00:03:40.664 "write_zeroes": true, 00:03:40.664 "zcopy": true, 00:03:40.664 "get_zone_info": false, 00:03:40.664 "zone_management": false, 00:03:40.664 "zone_append": false, 00:03:40.664 "compare": false, 00:03:40.664 "compare_and_write": false, 
00:03:40.664 "abort": true, 00:03:40.664 "seek_hole": false, 00:03:40.664 "seek_data": false, 00:03:40.664 "copy": true, 00:03:40.664 "nvme_iov_md": false 00:03:40.664 }, 00:03:40.664 "memory_domains": [ 00:03:40.664 { 00:03:40.664 "dma_device_id": "system", 00:03:40.664 "dma_device_type": 1 00:03:40.664 }, 00:03:40.664 { 00:03:40.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.664 "dma_device_type": 2 00:03:40.664 } 00:03:40.664 ], 00:03:40.664 "driver_specific": {} 00:03:40.664 } 00:03:40.664 ]' 00:03:40.664 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.942 [2024-07-24 19:00:46.680701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:40.942 [2024-07-24 19:00:46.680750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:40.942 [2024-07-24 19:00:46.680788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14be0c0 00:03:40.942 [2024-07-24 19:00:46.680815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:40.942 [2024-07-24 19:00:46.682312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:40.942 [2024-07-24 19:00:46.682340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:40.942 Passthru0 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:40.942 { 00:03:40.942 "name": "Malloc2", 00:03:40.942 "aliases": [ 00:03:40.942 "8a5e72cd-9ce4-4ea5-8eb9-19407d08950f" 00:03:40.942 ], 00:03:40.942 "product_name": "Malloc disk", 00:03:40.942 "block_size": 512, 00:03:40.942 "num_blocks": 16384, 00:03:40.942 "uuid": "8a5e72cd-9ce4-4ea5-8eb9-19407d08950f", 00:03:40.942 "assigned_rate_limits": { 00:03:40.942 "rw_ios_per_sec": 0, 00:03:40.942 "rw_mbytes_per_sec": 0, 00:03:40.942 "r_mbytes_per_sec": 0, 00:03:40.942 "w_mbytes_per_sec": 0 00:03:40.942 }, 00:03:40.942 "claimed": true, 00:03:40.942 "claim_type": "exclusive_write", 00:03:40.942 "zoned": false, 00:03:40.942 "supported_io_types": { 00:03:40.942 "read": true, 00:03:40.942 "write": true, 00:03:40.942 "unmap": true, 00:03:40.942 "flush": true, 00:03:40.942 "reset": true, 00:03:40.942 "nvme_admin": false, 00:03:40.942 "nvme_io": false, 00:03:40.942 "nvme_io_md": false, 00:03:40.942 "write_zeroes": true, 00:03:40.942 "zcopy": true, 00:03:40.942 "get_zone_info": false, 00:03:40.942 "zone_management": false, 00:03:40.942 "zone_append": false, 00:03:40.942 "compare": false, 00:03:40.942 "compare_and_write": false, 00:03:40.942 "abort": true, 00:03:40.942 "seek_hole": false, 00:03:40.942 "seek_data": false, 00:03:40.942 "copy": true, 
00:03:40.942 "nvme_iov_md": false 00:03:40.942 }, 00:03:40.942 "memory_domains": [ 00:03:40.942 { 00:03:40.942 "dma_device_id": "system", 00:03:40.942 "dma_device_type": 1 00:03:40.942 }, 00:03:40.942 { 00:03:40.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.942 "dma_device_type": 2 00:03:40.942 } 00:03:40.942 ], 00:03:40.942 "driver_specific": {} 00:03:40.942 }, 00:03:40.942 { 00:03:40.942 "name": "Passthru0", 00:03:40.942 "aliases": [ 00:03:40.942 "66be1582-0ff6-5e53-a184-7adc35297679" 00:03:40.942 ], 00:03:40.942 "product_name": "passthru", 00:03:40.942 "block_size": 512, 00:03:40.942 "num_blocks": 16384, 00:03:40.942 "uuid": "66be1582-0ff6-5e53-a184-7adc35297679", 00:03:40.942 "assigned_rate_limits": { 00:03:40.942 "rw_ios_per_sec": 0, 00:03:40.942 "rw_mbytes_per_sec": 0, 00:03:40.942 "r_mbytes_per_sec": 0, 00:03:40.942 "w_mbytes_per_sec": 0 00:03:40.942 }, 00:03:40.942 "claimed": false, 00:03:40.942 "zoned": false, 00:03:40.942 "supported_io_types": { 00:03:40.942 "read": true, 00:03:40.942 "write": true, 00:03:40.942 "unmap": true, 00:03:40.942 "flush": true, 00:03:40.942 "reset": true, 00:03:40.942 "nvme_admin": false, 00:03:40.942 "nvme_io": false, 00:03:40.942 "nvme_io_md": false, 00:03:40.942 "write_zeroes": true, 00:03:40.942 "zcopy": true, 00:03:40.942 "get_zone_info": false, 00:03:40.942 "zone_management": false, 00:03:40.942 "zone_append": false, 00:03:40.942 "compare": false, 00:03:40.942 "compare_and_write": false, 00:03:40.942 "abort": true, 00:03:40.942 "seek_hole": false, 00:03:40.942 "seek_data": false, 00:03:40.942 "copy": true, 00:03:40.942 "nvme_iov_md": false 00:03:40.942 }, 00:03:40.942 "memory_domains": [ 00:03:40.942 { 00:03:40.942 "dma_device_id": "system", 00:03:40.942 "dma_device_type": 1 00:03:40.942 }, 00:03:40.942 { 00:03:40.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.942 "dma_device_type": 2 00:03:40.942 } 00:03:40.942 ], 00:03:40.942 "driver_specific": { 00:03:40.942 "passthru": { 00:03:40.942 "name": "Passthru0", 00:03:40.942 "base_bdev_name": "Malloc2" 00:03:40.942 } 00:03:40.942 } 00:03:40.942 } 00:03:40.942 ]' 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:40.942 19:00:46 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:40.942 00:03:40.942 real 0m0.260s 00:03:40.942 user 0m0.165s 00:03:40.942 sys 0m0.027s 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.942 19:00:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.942 ************************************ 00:03:40.942 END TEST rpc_daemon_integrity 00:03:40.942 ************************************ 00:03:40.942 19:00:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:40.942 19:00:46 rpc -- rpc/rpc.sh@84 -- # killprocess 2464749 00:03:40.942 19:00:46 rpc -- common/autotest_common.sh@950 -- # '[' -z 2464749 ']' 00:03:40.942 19:00:46 rpc -- common/autotest_common.sh@954 -- # kill -0 2464749 00:03:40.942 19:00:46 rpc -- common/autotest_common.sh@955 -- # uname 00:03:40.942 19:00:46 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:40.942 19:00:46 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2464749 00:03:40.942 19:00:46 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:40.942 19:00:46 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:40.942 19:00:46 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2464749' 00:03:40.942 killing process with pid 2464749 00:03:40.942 19:00:46 rpc -- common/autotest_common.sh@969 -- # kill 2464749 00:03:40.943 19:00:46 rpc -- common/autotest_common.sh@974 -- # wait 2464749 00:03:41.202 00:03:41.202 real 0m2.546s 00:03:41.202 user 0m3.334s 00:03:41.202 sys 0m0.665s 00:03:41.202 19:00:47 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.202 19:00:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.202 ************************************ 00:03:41.202 END TEST rpc 00:03:41.202 ************************************ 00:03:41.460 19:00:47 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:41.460 19:00:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.460 19:00:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.460 19:00:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.460 ************************************ 00:03:41.460 START TEST skip_rpc 00:03:41.460 ************************************ 00:03:41.460 19:00:47 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:41.460 * Looking for test storage... 
00:03:41.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.460 19:00:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:41.460 19:00:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:41.460 19:00:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:41.460 19:00:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.460 19:00:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.460 19:00:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.460 ************************************ 00:03:41.460 START TEST skip_rpc 00:03:41.460 ************************************ 00:03:41.460 19:00:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:03:41.460 19:00:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2465180 00:03:41.460 19:00:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:41.460 19:00:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:41.460 19:00:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:41.460 [2024-07-24 19:00:47.388275] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:03:41.460 [2024-07-24 19:00:47.388377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465180 ] 00:03:41.460 EAL: No free 2048 kB hugepages reported on node 1 00:03:41.460 [2024-07-24 19:00:47.448983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.718 [2024-07-24 19:00:47.569250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2465180 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2465180 ']' 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2465180 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2465180 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2465180' 00:03:46.978 killing process with pid 2465180 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2465180 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2465180 00:03:46.978 00:03:46.978 real 0m5.366s 00:03:46.978 user 0m5.066s 00:03:46.978 sys 0m0.291s 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.978 19:00:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.978 ************************************ 00:03:46.978 END TEST skip_rpc 00:03:46.978 ************************************ 00:03:46.978 19:00:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:46.978 19:00:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.978 19:00:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.978 19:00:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.978 ************************************ 00:03:46.978 START TEST skip_rpc_with_json 00:03:46.978 ************************************ 00:03:46.978 19:00:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:03:46.978 19:00:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:46.978 19:00:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2465660 00:03:46.978 19:00:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:46.978 19:00:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:46.978 19:00:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2465660 00:03:46.978 19:00:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2465660 ']' 00:03:46.978 19:00:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:46.978 19:00:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:46.978 19:00:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:46.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
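The skip_rpc_with_json test starting here round-trips the target configuration through JSON: issue RPCs, snapshot them with save_config, then prove a fresh target can be rebuilt from the file alone. A rough by-hand equivalent, assuming a running spdk_tgt and an illustrative /tmp path for the snapshot:

# before any transport exists, this is expected to fail with JSON-RPC error -19 (No such device)
scripts/rpc.py nvmf_get_transports --trtype tcp
# create the TCP transport, then dump the full configuration
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py save_config > /tmp/config.json
# a second target replays the file with no RPC server at all; its log should contain 'TCP Transport Init'
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json

The grep for 'TCP Transport Init' against log.txt further down is exactly that last assertion.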
00:03:46.978 19:00:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:46.978 19:00:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:46.978 [2024-07-24 19:00:52.814705] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:03:46.978 [2024-07-24 19:00:52.814800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465660 ] 00:03:46.978 EAL: No free 2048 kB hugepages reported on node 1 00:03:46.978 [2024-07-24 19:00:52.874454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.978 [2024-07-24 19:00:52.991795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.236 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:47.236 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:03:47.236 19:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:47.236 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.236 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.236 [2024-07-24 19:00:53.233840] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:47.236 request: 00:03:47.236 { 00:03:47.236 "trtype": "tcp", 00:03:47.236 "method": "nvmf_get_transports", 00:03:47.236 "req_id": 1 00:03:47.236 } 00:03:47.236 Got JSON-RPC error response 00:03:47.236 response: 00:03:47.236 { 00:03:47.236 "code": -19, 00:03:47.236 "message": "No such device" 00:03:47.236 } 00:03:47.236 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:47.236 19:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:47.236 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.236 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.236 [2024-07-24 19:00:53.241995] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:47.236 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.236 19:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:47.236 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.236 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.493 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.493 19:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:47.493 { 00:03:47.493 "subsystems": [ 00:03:47.493 { 00:03:47.493 "subsystem": "vfio_user_target", 00:03:47.493 "config": null 00:03:47.493 }, 00:03:47.493 { 00:03:47.493 "subsystem": "keyring", 00:03:47.493 "config": [] 00:03:47.493 }, 00:03:47.493 { 00:03:47.493 "subsystem": "iobuf", 00:03:47.493 "config": [ 00:03:47.493 { 00:03:47.493 "method": "iobuf_set_options", 00:03:47.493 "params": { 00:03:47.493 "small_pool_count": 8192, 00:03:47.493 "large_pool_count": 1024, 00:03:47.493 "small_bufsize": 8192, 00:03:47.493 "large_bufsize": 
135168 00:03:47.493 } 00:03:47.493 } 00:03:47.493 ] 00:03:47.493 }, 00:03:47.493 { 00:03:47.493 "subsystem": "sock", 00:03:47.493 "config": [ 00:03:47.494 { 00:03:47.494 "method": "sock_set_default_impl", 00:03:47.494 "params": { 00:03:47.494 "impl_name": "posix" 00:03:47.494 } 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "method": "sock_impl_set_options", 00:03:47.494 "params": { 00:03:47.494 "impl_name": "ssl", 00:03:47.494 "recv_buf_size": 4096, 00:03:47.494 "send_buf_size": 4096, 00:03:47.494 "enable_recv_pipe": true, 00:03:47.494 "enable_quickack": false, 00:03:47.494 "enable_placement_id": 0, 00:03:47.494 "enable_zerocopy_send_server": true, 00:03:47.494 "enable_zerocopy_send_client": false, 00:03:47.494 "zerocopy_threshold": 0, 00:03:47.494 "tls_version": 0, 00:03:47.494 "enable_ktls": false 00:03:47.494 } 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "method": "sock_impl_set_options", 00:03:47.494 "params": { 00:03:47.494 "impl_name": "posix", 00:03:47.494 "recv_buf_size": 2097152, 00:03:47.494 "send_buf_size": 2097152, 00:03:47.494 "enable_recv_pipe": true, 00:03:47.494 "enable_quickack": false, 00:03:47.494 "enable_placement_id": 0, 00:03:47.494 "enable_zerocopy_send_server": true, 00:03:47.494 "enable_zerocopy_send_client": false, 00:03:47.494 "zerocopy_threshold": 0, 00:03:47.494 "tls_version": 0, 00:03:47.494 "enable_ktls": false 00:03:47.494 } 00:03:47.494 } 00:03:47.494 ] 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "subsystem": "vmd", 00:03:47.494 "config": [] 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "subsystem": "accel", 00:03:47.494 "config": [ 00:03:47.494 { 00:03:47.494 "method": "accel_set_options", 00:03:47.494 "params": { 00:03:47.494 "small_cache_size": 128, 00:03:47.494 "large_cache_size": 16, 00:03:47.494 "task_count": 2048, 00:03:47.494 "sequence_count": 2048, 00:03:47.494 "buf_count": 2048 00:03:47.494 } 00:03:47.494 } 00:03:47.494 ] 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "subsystem": "bdev", 00:03:47.494 "config": [ 00:03:47.494 { 00:03:47.494 "method": "bdev_set_options", 00:03:47.494 "params": { 00:03:47.494 "bdev_io_pool_size": 65535, 00:03:47.494 "bdev_io_cache_size": 256, 00:03:47.494 "bdev_auto_examine": true, 00:03:47.494 "iobuf_small_cache_size": 128, 00:03:47.494 "iobuf_large_cache_size": 16 00:03:47.494 } 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "method": "bdev_raid_set_options", 00:03:47.494 "params": { 00:03:47.494 "process_window_size_kb": 1024, 00:03:47.494 "process_max_bandwidth_mb_sec": 0 00:03:47.494 } 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "method": "bdev_iscsi_set_options", 00:03:47.494 "params": { 00:03:47.494 "timeout_sec": 30 00:03:47.494 } 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "method": "bdev_nvme_set_options", 00:03:47.494 "params": { 00:03:47.494 "action_on_timeout": "none", 00:03:47.494 "timeout_us": 0, 00:03:47.494 "timeout_admin_us": 0, 00:03:47.494 "keep_alive_timeout_ms": 10000, 00:03:47.494 "arbitration_burst": 0, 00:03:47.494 "low_priority_weight": 0, 00:03:47.494 "medium_priority_weight": 0, 00:03:47.494 "high_priority_weight": 0, 00:03:47.494 "nvme_adminq_poll_period_us": 10000, 00:03:47.494 "nvme_ioq_poll_period_us": 0, 00:03:47.494 "io_queue_requests": 0, 00:03:47.494 "delay_cmd_submit": true, 00:03:47.494 "transport_retry_count": 4, 00:03:47.494 "bdev_retry_count": 3, 00:03:47.494 "transport_ack_timeout": 0, 00:03:47.494 "ctrlr_loss_timeout_sec": 0, 00:03:47.494 "reconnect_delay_sec": 0, 00:03:47.494 "fast_io_fail_timeout_sec": 0, 00:03:47.494 "disable_auto_failback": false, 00:03:47.494 "generate_uuids": 
false, 00:03:47.494 "transport_tos": 0, 00:03:47.494 "nvme_error_stat": false, 00:03:47.494 "rdma_srq_size": 0, 00:03:47.494 "io_path_stat": false, 00:03:47.494 "allow_accel_sequence": false, 00:03:47.494 "rdma_max_cq_size": 0, 00:03:47.494 "rdma_cm_event_timeout_ms": 0, 00:03:47.494 "dhchap_digests": [ 00:03:47.494 "sha256", 00:03:47.494 "sha384", 00:03:47.494 "sha512" 00:03:47.494 ], 00:03:47.494 "dhchap_dhgroups": [ 00:03:47.494 "null", 00:03:47.494 "ffdhe2048", 00:03:47.494 "ffdhe3072", 00:03:47.494 "ffdhe4096", 00:03:47.494 "ffdhe6144", 00:03:47.494 "ffdhe8192" 00:03:47.494 ] 00:03:47.494 } 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "method": "bdev_nvme_set_hotplug", 00:03:47.494 "params": { 00:03:47.494 "period_us": 100000, 00:03:47.494 "enable": false 00:03:47.494 } 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "method": "bdev_wait_for_examine" 00:03:47.494 } 00:03:47.494 ] 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "subsystem": "scsi", 00:03:47.494 "config": null 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "subsystem": "scheduler", 00:03:47.494 "config": [ 00:03:47.494 { 00:03:47.494 "method": "framework_set_scheduler", 00:03:47.494 "params": { 00:03:47.494 "name": "static" 00:03:47.494 } 00:03:47.494 } 00:03:47.494 ] 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "subsystem": "vhost_scsi", 00:03:47.494 "config": [] 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "subsystem": "vhost_blk", 00:03:47.494 "config": [] 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "subsystem": "ublk", 00:03:47.494 "config": [] 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "subsystem": "nbd", 00:03:47.494 "config": [] 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "subsystem": "nvmf", 00:03:47.494 "config": [ 00:03:47.494 { 00:03:47.494 "method": "nvmf_set_config", 00:03:47.494 "params": { 00:03:47.494 "discovery_filter": "match_any", 00:03:47.494 "admin_cmd_passthru": { 00:03:47.494 "identify_ctrlr": false 00:03:47.494 } 00:03:47.494 } 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "method": "nvmf_set_max_subsystems", 00:03:47.494 "params": { 00:03:47.494 "max_subsystems": 1024 00:03:47.494 } 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "method": "nvmf_set_crdt", 00:03:47.494 "params": { 00:03:47.494 "crdt1": 0, 00:03:47.494 "crdt2": 0, 00:03:47.494 "crdt3": 0 00:03:47.494 } 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "method": "nvmf_create_transport", 00:03:47.494 "params": { 00:03:47.494 "trtype": "TCP", 00:03:47.494 "max_queue_depth": 128, 00:03:47.494 "max_io_qpairs_per_ctrlr": 127, 00:03:47.494 "in_capsule_data_size": 4096, 00:03:47.494 "max_io_size": 131072, 00:03:47.494 "io_unit_size": 131072, 00:03:47.494 "max_aq_depth": 128, 00:03:47.494 "num_shared_buffers": 511, 00:03:47.494 "buf_cache_size": 4294967295, 00:03:47.494 "dif_insert_or_strip": false, 00:03:47.494 "zcopy": false, 00:03:47.494 "c2h_success": true, 00:03:47.494 "sock_priority": 0, 00:03:47.494 "abort_timeout_sec": 1, 00:03:47.494 "ack_timeout": 0, 00:03:47.494 "data_wr_pool_size": 0 00:03:47.494 } 00:03:47.494 } 00:03:47.494 ] 00:03:47.494 }, 00:03:47.494 { 00:03:47.494 "subsystem": "iscsi", 00:03:47.494 "config": [ 00:03:47.494 { 00:03:47.494 "method": "iscsi_set_options", 00:03:47.494 "params": { 00:03:47.494 "node_base": "iqn.2016-06.io.spdk", 00:03:47.494 "max_sessions": 128, 00:03:47.494 "max_connections_per_session": 2, 00:03:47.494 "max_queue_depth": 64, 00:03:47.494 "default_time2wait": 2, 00:03:47.494 "default_time2retain": 20, 00:03:47.494 "first_burst_length": 8192, 00:03:47.494 "immediate_data": true, 00:03:47.494 "allow_duplicated_isid": 
false, 00:03:47.494 "error_recovery_level": 0, 00:03:47.494 "nop_timeout": 60, 00:03:47.494 "nop_in_interval": 30, 00:03:47.494 "disable_chap": false, 00:03:47.494 "require_chap": false, 00:03:47.494 "mutual_chap": false, 00:03:47.494 "chap_group": 0, 00:03:47.494 "max_large_datain_per_connection": 64, 00:03:47.494 "max_r2t_per_connection": 4, 00:03:47.494 "pdu_pool_size": 36864, 00:03:47.494 "immediate_data_pool_size": 16384, 00:03:47.494 "data_out_pool_size": 2048 00:03:47.494 } 00:03:47.494 } 00:03:47.494 ] 00:03:47.494 } 00:03:47.494 ] 00:03:47.494 } 00:03:47.494 19:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:47.494 19:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2465660 00:03:47.494 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2465660 ']' 00:03:47.494 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2465660 00:03:47.494 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:47.494 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:47.494 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2465660 00:03:47.494 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:47.494 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:47.494 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2465660' 00:03:47.494 killing process with pid 2465660 00:03:47.494 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2465660 00:03:47.494 19:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2465660 00:03:48.059 19:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2465770 00:03:48.059 19:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:48.059 19:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:53.319 19:00:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2465770 00:03:53.319 19:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2465770 ']' 00:03:53.319 19:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2465770 00:03:53.319 19:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:53.319 19:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:53.319 19:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2465770 00:03:53.319 19:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:53.319 19:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:53.319 19:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2465770' 00:03:53.319 killing process with pid 2465770 00:03:53.319 19:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2465770 00:03:53.319 19:00:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
2465770 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:53.319 00:03:53.319 real 0m6.374s 00:03:53.319 user 0m6.087s 00:03:53.319 sys 0m0.616s 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:53.319 ************************************ 00:03:53.319 END TEST skip_rpc_with_json 00:03:53.319 ************************************ 00:03:53.319 19:00:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:53.319 19:00:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:53.319 19:00:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.319 19:00:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.319 ************************************ 00:03:53.319 START TEST skip_rpc_with_delay 00:03:53.319 ************************************ 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:53.319 [2024-07-24 19:00:59.244094] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
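The --wait-for-rpc error just above is the point of skip_rpc_with_delay: --no-rpc-server and --wait-for-rpc are contradictory, so spdk_tgt must refuse to start rather than wait forever for RPCs that can never arrive. The NOT helper visible in the xtrace inverts an exit status; a hypothetical minimal paraphrase of what the test asserts:

# NOT succeeds only when the wrapped command fails
NOT() { ! "$@"; }
# must exit non-zero with the app.c:832 error above, which NOT turns into success
NOT build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc

The es=1 bookkeeping that follows in the trace is the helper recording that failure before mapping it back to a passing result.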
00:03:53.319 [2024-07-24 19:00:59.244227] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:53.319 00:03:53.319 real 0m0.079s 00:03:53.319 user 0m0.052s 00:03:53.319 sys 0m0.026s 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:53.319 19:00:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:53.319 ************************************ 00:03:53.319 END TEST skip_rpc_with_delay 00:03:53.319 ************************************ 00:03:53.319 19:00:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:53.319 19:00:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:53.319 19:00:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:53.319 19:00:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:53.319 19:00:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.319 19:00:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.319 ************************************ 00:03:53.319 START TEST exit_on_failed_rpc_init 00:03:53.319 ************************************ 00:03:53.319 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:03:53.319 19:00:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2466328 00:03:53.319 19:00:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:53.319 19:00:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2466328 00:03:53.319 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2466328 ']' 00:03:53.319 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:53.319 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:53.319 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:53.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:53.319 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:53.319 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:53.577 [2024-07-24 19:00:59.377367] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:03:53.577 [2024-07-24 19:00:59.377475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466328 ] 00:03:53.577 EAL: No free 2048 kB hugepages reported on node 1 00:03:53.577 [2024-07-24 19:00:59.436803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.577 [2024-07-24 19:00:59.553510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.834 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:53.834 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:03:53.835 19:00:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:53.835 19:00:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:53.835 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:53.835 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:53.835 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.835 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:53.835 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.835 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:53.835 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.835 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:53.835 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.835 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:53.835 19:00:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:53.835 [2024-07-24 19:00:59.841768] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
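The failure being provoked here is a socket collision: the first spdk_tgt (pid 2466328) already owns the default RPC socket /var/tmp/spdk.sock, so the second instance started above must abort, as the "RPC Unix domain socket path /var/tmp/spdk.sock in use" error below confirms. A condensed sketch of the collision, with paths and core masks as logged (the test proper uses waitforlisten instead of a fixed sleep):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &   # claims /var/tmp/spdk.sock
  spdk_pid=$!
  sleep 1   # stand-in for waitforlisten
  # a second target on the same (busy) RPC socket must fail to start
  if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2; then
      echo 'ERROR: second target started despite the busy RPC socket' >&2
  fi
  kill "$spdk_pid"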
00:03:53.835 [2024-07-24 19:00:59.841867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466422 ] 00:03:54.092 EAL: No free 2048 kB hugepages reported on node 1 00:03:54.092 [2024-07-24 19:00:59.903624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.092 [2024-07-24 19:01:00.028545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:03:54.092 [2024-07-24 19:01:00.028696] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:54.092 [2024-07-24 19:01:00.028730] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:54.092 [2024-07-24 19:01:00.028751] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:54.350 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:54.350 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:54.350 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:54.350 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:54.350 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:54.350 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:54.350 19:01:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:54.350 19:01:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2466328 00:03:54.350 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2466328 ']' 00:03:54.350 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2466328 00:03:54.351 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:03:54.351 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:54.351 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2466328 00:03:54.351 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:54.351 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:54.351 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2466328' 00:03:54.351 killing process with pid 2466328 00:03:54.351 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2466328 00:03:54.351 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2466328 00:03:54.609 00:03:54.609 real 0m1.189s 00:03:54.609 user 0m1.460s 00:03:54.609 sys 0m0.402s 00:03:54.609 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.609 19:01:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:54.609 ************************************ 00:03:54.609 END TEST exit_on_failed_rpc_init 00:03:54.609 ************************************ 00:03:54.609 19:01:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:54.609 00:03:54.609 real 0m13.291s 00:03:54.609 user 0m12.781s 00:03:54.609 sys 0m1.514s 00:03:54.609 19:01:00 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.609 19:01:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.609 ************************************ 00:03:54.609 END TEST skip_rpc 00:03:54.609 ************************************ 00:03:54.609 19:01:00 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:54.609 19:01:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.609 19:01:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.609 19:01:00 -- common/autotest_common.sh@10 -- # set +x 00:03:54.609 ************************************ 00:03:54.609 START TEST rpc_client 00:03:54.609 ************************************ 00:03:54.609 19:01:00 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:54.869 * Looking for test storage... 00:03:54.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:54.869 19:01:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:54.869 OK 00:03:54.869 19:01:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:54.869 00:03:54.869 real 0m0.072s 00:03:54.869 user 0m0.026s 00:03:54.869 sys 0m0.050s 00:03:54.869 19:01:00 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.869 19:01:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:54.869 ************************************ 00:03:54.869 END TEST rpc_client 00:03:54.869 ************************************ 00:03:54.869 19:01:00 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:54.869 19:01:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.869 19:01:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.869 19:01:00 -- common/autotest_common.sh@10 -- # set +x 00:03:54.869 ************************************ 00:03:54.869 START TEST json_config 00:03:54.869 ************************************ 00:03:54.869 19:01:00 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:54.869 19:01:00 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:03:54.869 19:01:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:54.869 19:01:00 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:54.869 19:01:00 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:54.869 19:01:00 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:54.869 19:01:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.869 19:01:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.869 19:01:00 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.869 19:01:00 json_config -- paths/export.sh@5 -- # export PATH 00:03:54.869 19:01:00 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@47 -- # : 0 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:54.869 19:01:00 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:54.870 19:01:00 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:03:54.870 INFO: JSON configuration test init 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:03:54.870 19:01:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:54.870 19:01:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:03:54.870 19:01:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:54.870 19:01:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.870 19:01:00 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:03:54.870 19:01:00 json_config -- json_config/common.sh@9 -- # local app=target 00:03:54.870 19:01:00 json_config -- json_config/common.sh@10 -- # shift 00:03:54.870 19:01:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:54.870 19:01:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:54.870 19:01:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:54.870 19:01:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:03:54.870 19:01:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:54.870 19:01:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2466632 00:03:54.870 19:01:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:54.870 19:01:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:54.870 Waiting for target to run... 00:03:54.870 19:01:00 json_config -- json_config/common.sh@25 -- # waitforlisten 2466632 /var/tmp/spdk_tgt.sock 00:03:54.870 19:01:00 json_config -- common/autotest_common.sh@831 -- # '[' -z 2466632 ']' 00:03:54.870 19:01:00 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:54.870 19:01:00 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:54.870 19:01:00 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:54.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:54.870 19:01:00 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:54.870 19:01:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.870 [2024-07-24 19:01:00.826114] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:03:54.870 [2024-07-24 19:01:00.826219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466632 ] 00:03:54.870 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.436 [2024-07-24 19:01:01.173707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.436 [2024-07-24 19:01:01.269267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.003 19:01:01 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:56.003 19:01:01 json_config -- common/autotest_common.sh@864 -- # return 0 00:03:56.003 19:01:01 json_config -- json_config/common.sh@26 -- # echo '' 00:03:56.003 00:03:56.003 19:01:01 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:03:56.003 19:01:01 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:03:56.003 19:01:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:56.003 19:01:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.003 19:01:01 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:03:56.003 19:01:01 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:03:56.003 19:01:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:56.003 19:01:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.003 19:01:01 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:56.003 19:01:01 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:03:56.003 19:01:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:59.286 19:01:05 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:03:59.286 19:01:05 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:59.286 19:01:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:59.286 19:01:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.286 19:01:05 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:59.286 19:01:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:59.286 19:01:05 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:59.286 19:01:05 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:59.286 19:01:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:59.286 19:01:05 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:59.544 19:01:05 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:59.544 19:01:05 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:59.544 19:01:05 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:03:59.544 19:01:05 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:03:59.544 19:01:05 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:03:59.544 19:01:05 json_config -- json_config/json_config.sh@51 -- # sort 00:03:59.544 19:01:05 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:03:59.544 19:01:05 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:03:59.544 19:01:05 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:03:59.544 19:01:05 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:03:59.544 19:01:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:59.544 19:01:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.545 19:01:05 json_config -- json_config/json_config.sh@59 -- # return 0 00:03:59.545 19:01:05 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:59.545 19:01:05 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:59.545 19:01:05 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:03:59.545 19:01:05 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:03:59.545 19:01:05 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:03:59.545 19:01:05 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:03:59.545 19:01:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:59.545 19:01:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.545 19:01:05 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:59.545 19:01:05 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:03:59.545 19:01:05 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:03:59.545 19:01:05 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:59.545 19:01:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:59.801 MallocForNvmf0 00:03:59.801 
19:01:05 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:59.801 19:01:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:00.058 MallocForNvmf1 00:04:00.058 19:01:05 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:00.058 19:01:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:00.315 [2024-07-24 19:01:06.233047] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:00.315 19:01:06 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:00.316 19:01:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:00.573 19:01:06 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:00.573 19:01:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:00.830 19:01:06 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:00.830 19:01:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:01.088 19:01:06 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:01.089 19:01:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:01.348 [2024-07-24 19:01:07.216193] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:01.348 19:01:07 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:01.348 19:01:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:01.348 19:01:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.348 19:01:07 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:01.348 19:01:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:01.348 19:01:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.348 19:01:07 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:01.348 19:01:07 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:01.348 19:01:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:01.606 MallocBdevForConfigChangeCheck 00:04:01.606 19:01:07 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:01.606 19:01:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:01.606 19:01:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.606 19:01:07 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:01.606 19:01:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:02.171 19:01:07 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:02.171 INFO: shutting down applications... 00:04:02.171 19:01:07 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:02.171 19:01:07 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:02.171 19:01:07 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:02.171 19:01:07 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:03.541 Calling clear_iscsi_subsystem 00:04:03.541 Calling clear_nvmf_subsystem 00:04:03.541 Calling clear_nbd_subsystem 00:04:03.541 Calling clear_ublk_subsystem 00:04:03.541 Calling clear_vhost_blk_subsystem 00:04:03.541 Calling clear_vhost_scsi_subsystem 00:04:03.541 Calling clear_bdev_subsystem 00:04:03.541 19:01:09 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:03.541 19:01:09 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:03.541 19:01:09 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:03.541 19:01:09 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.541 19:01:09 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:03.542 19:01:09 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:04.106 19:01:09 json_config -- json_config/json_config.sh@349 -- # break 00:04:04.106 19:01:09 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:04.106 19:01:09 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:04.106 19:01:09 json_config -- json_config/common.sh@31 -- # local app=target 00:04:04.106 19:01:09 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:04.106 19:01:09 json_config -- json_config/common.sh@35 -- # [[ -n 2466632 ]] 00:04:04.106 19:01:09 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2466632 00:04:04.106 19:01:09 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:04.106 19:01:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:04.106 19:01:09 json_config -- json_config/common.sh@41 -- # kill -0 2466632 00:04:04.106 19:01:09 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:04.676 19:01:10 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:04.676 19:01:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:04.676 19:01:10 json_config -- json_config/common.sh@41 -- # kill -0 2466632 00:04:04.676 19:01:10 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:04:04.676 19:01:10 json_config -- json_config/common.sh@43 -- # break 00:04:04.676 19:01:10 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:04.676 19:01:10 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:04.676 SPDK target shutdown done 00:04:04.676 19:01:10 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:04:04.676 INFO: relaunching applications... 00:04:04.676 19:01:10 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.676 19:01:10 json_config -- json_config/common.sh@9 -- # local app=target 00:04:04.676 19:01:10 json_config -- json_config/common.sh@10 -- # shift 00:04:04.676 19:01:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.676 19:01:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.676 19:01:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:04.676 19:01:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.676 19:01:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.677 19:01:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2467578 00:04:04.677 19:01:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.677 19:01:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.677 Waiting for target to run... 00:04:04.677 19:01:10 json_config -- json_config/common.sh@25 -- # waitforlisten 2467578 /var/tmp/spdk_tgt.sock 00:04:04.677 19:01:10 json_config -- common/autotest_common.sh@831 -- # '[' -z 2467578 ']' 00:04:04.677 19:01:10 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.677 19:01:10 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:04.677 19:01:10 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:04.677 19:01:10 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:04.677 19:01:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.677 [2024-07-24 19:01:10.524939] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
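For context, the relaunch above replays a configuration that was first assembled through individual RPC calls: save_config dumps the live JSON, and a fresh target consumes it via --json. Condensed to the commands visible in this log (socket, sizes, and NQN as logged; the saved file lives at the spdk repo root in the actual run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  # persist the running config, then restart the target from the saved JSON
  $rpc -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json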
00:04:04.677 [2024-07-24 19:01:10.525030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2467578 ] 00:04:04.677 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.936 [2024-07-24 19:01:10.825064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.936 [2024-07-24 19:01:10.920967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.216 [2024-07-24 19:01:13.945560] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:08.216 [2024-07-24 19:01:13.977936] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:08.216 19:01:14 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:08.216 19:01:14 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:08.216 19:01:14 json_config -- json_config/common.sh@26 -- # echo '' 00:04:08.216 00:04:08.216 19:01:14 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:08.216 19:01:14 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:08.216 INFO: Checking if target configuration is the same... 00:04:08.216 19:01:14 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.216 19:01:14 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:08.216 19:01:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.216 + '[' 2 -ne 2 ']' 00:04:08.216 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:08.216 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:08.216 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.216 +++ basename /dev/fd/62 00:04:08.216 ++ mktemp /tmp/62.XXX 00:04:08.216 + tmp_file_1=/tmp/62.JPX 00:04:08.216 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.216 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:08.216 + tmp_file_2=/tmp/spdk_tgt_config.json.fBE 00:04:08.216 + ret=0 00:04:08.216 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.474 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.731 + diff -u /tmp/62.JPX /tmp/spdk_tgt_config.json.fBE 00:04:08.731 + echo 'INFO: JSON config files are the same' 00:04:08.731 INFO: JSON config files are the same 00:04:08.731 + rm /tmp/62.JPX /tmp/spdk_tgt_config.json.fBE 00:04:08.731 + exit 0 00:04:08.731 19:01:14 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:08.731 19:01:14 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:08.731 INFO: changing configuration and checking if this can be detected... 
00:04:08.731 19:01:14 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:08.731 19:01:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:08.989 19:01:14 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.989 19:01:14 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:08.989 19:01:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.989 + '[' 2 -ne 2 ']' 00:04:08.989 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:08.989 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:08.989 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.989 +++ basename /dev/fd/62 00:04:08.989 ++ mktemp /tmp/62.XXX 00:04:08.989 + tmp_file_1=/tmp/62.dgN 00:04:08.989 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.989 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:08.989 + tmp_file_2=/tmp/spdk_tgt_config.json.r3J 00:04:08.989 + ret=0 00:04:08.989 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:09.257 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:09.523 + diff -u /tmp/62.dgN /tmp/spdk_tgt_config.json.r3J 00:04:09.523 + ret=1 00:04:09.523 + echo '=== Start of file: /tmp/62.dgN ===' 00:04:09.523 + cat /tmp/62.dgN 00:04:09.523 + echo '=== End of file: /tmp/62.dgN ===' 00:04:09.523 + echo '' 00:04:09.523 + echo '=== Start of file: /tmp/spdk_tgt_config.json.r3J ===' 00:04:09.523 + cat /tmp/spdk_tgt_config.json.r3J 00:04:09.523 + echo '=== End of file: /tmp/spdk_tgt_config.json.r3J ===' 00:04:09.523 + echo '' 00:04:09.523 + rm /tmp/62.dgN /tmp/spdk_tgt_config.json.r3J 00:04:09.523 + exit 1 00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:09.523 INFO: configuration change detected. 
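Both verdicts above ("JSON config files are the same", then ret=1 after MallocBdevForConfigChangeCheck is deleted) come from the same json_diff.sh mechanism: dump the running config, normalize both JSON documents, and diff them. A sketch, assuming config_filter.py filters stdin to stdout (the script itself plumbs its inputs through /dev/fd and mktemp files, as the xtrace lines above show):

  filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
  tmp_live=$(mktemp /tmp/62.XXX)
  tmp_saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$tmp_live"
  $filter -method sort < spdk_tgt_config.json > "$tmp_saved"
  # identical normalized configs -> ret=0; any drift -> ret=1
  diff -u "$tmp_live" "$tmp_saved" && echo 'INFO: JSON config files are the same'
  rm "$tmp_live" "$tmp_saved"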
00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@321 -- # [[ -n 2467578 ]] 00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.523 19:01:15 json_config -- json_config/json_config.sh@327 -- # killprocess 2467578 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@950 -- # '[' -z 2467578 ']' 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@954 -- # kill -0 2467578 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@955 -- # uname 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2467578 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2467578' 00:04:09.523 killing process with pid 2467578 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@969 -- # kill 2467578 00:04:09.523 19:01:15 json_config -- common/autotest_common.sh@974 -- # wait 2467578 00:04:10.936 19:01:16 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:10.936 19:01:16 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:10.936 19:01:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:10.936 19:01:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.936 19:01:16 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:10.936 19:01:16 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:10.936 INFO: Success 00:04:10.936 00:04:10.936 real 0m16.233s 
00:04:10.936 user 0m18.796s 00:04:10.936 sys 0m1.852s 00:04:10.936 19:01:16 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.936 19:01:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.936 ************************************ 00:04:10.936 END TEST json_config 00:04:10.936 ************************************ 00:04:11.194 19:01:16 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:11.194 19:01:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:11.194 19:01:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:11.194 19:01:16 -- common/autotest_common.sh@10 -- # set +x 00:04:11.194 ************************************ 00:04:11.194 START TEST json_config_extra_key 00:04:11.194 ************************************ 00:04:11.194 19:01:16 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:11.194 19:01:17 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:11.194 19:01:17 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:11.194 19:01:17 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.194 19:01:17 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.194 19:01:17 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.194 19:01:17 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.194 19:01:17 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:11.194 19:01:17 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.194 19:01:17 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:11.194 19:01:17 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.194 19:01:17 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.194 19:01:17 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.194 19:01:17 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:11.195 19:01:17 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.195 19:01:17 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.195 19:01:17 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.195 19:01:17 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.195 19:01:17 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.195 19:01:17 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.195 19:01:17 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:11.195 19:01:17 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:11.195 19:01:17 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:11.195 19:01:17 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:11.195 19:01:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:11.195 19:01:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:11.195 19:01:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:11.195 19:01:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:11.195 19:01:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:11.195 19:01:17 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:11.195 19:01:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:11.195 19:01:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:11.195 19:01:17 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:11.195 19:01:17 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:11.195 INFO: launching applications... 00:04:11.195 19:01:17 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:11.195 19:01:17 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:11.195 19:01:17 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:11.195 19:01:17 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:11.195 19:01:17 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:11.195 19:01:17 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:11.195 19:01:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.195 19:01:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.195 19:01:17 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2468298 00:04:11.195 19:01:17 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:11.195 19:01:17 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:11.195 Waiting for target to run... 00:04:11.195 19:01:17 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2468298 /var/tmp/spdk_tgt.sock 00:04:11.195 19:01:17 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2468298 ']' 00:04:11.195 19:01:17 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:11.195 19:01:17 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:11.195 19:01:17 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:11.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:11.195 19:01:17 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:11.195 19:01:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:11.195 [2024-07-24 19:01:17.114313] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
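The teardown this test performs below (and json_config performed before it) follows the idiom from json_config/common.sh visible in the log: send SIGINT, then poll with kill -0 for up to 30 half-second intervals before declaring the target shut down. Roughly:

  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$app_pid" 2>/dev/null || break   # target has exited
      sleep 0.5
  done
  echo 'SPDK target shutdown done'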
00:04:11.195 [2024-07-24 19:01:17.114413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468298 ] 00:04:11.195 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.453 [2024-07-24 19:01:17.418180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.712 [2024-07-24 19:01:17.514060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.280 19:01:18 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:12.280 19:01:18 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:12.280 19:01:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:12.280 00:04:12.280 19:01:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:12.280 INFO: shutting down applications... 00:04:12.280 19:01:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:12.280 19:01:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:12.280 19:01:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:12.280 19:01:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2468298 ]] 00:04:12.280 19:01:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2468298 00:04:12.280 19:01:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:12.280 19:01:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.280 19:01:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2468298 00:04:12.280 19:01:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:12.862 19:01:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:12.862 19:01:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.862 19:01:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2468298 00:04:12.863 19:01:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:12.863 19:01:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:12.863 19:01:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:12.863 19:01:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:12.863 SPDK target shutdown done 00:04:12.863 19:01:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:12.863 Success 00:04:12.863 00:04:12.863 real 0m1.659s 00:04:12.863 user 0m1.659s 00:04:12.863 sys 0m0.408s 00:04:12.863 19:01:18 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.863 19:01:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:12.863 ************************************ 00:04:12.863 END TEST json_config_extra_key 00:04:12.863 ************************************ 00:04:12.863 19:01:18 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:12.863 19:01:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.863 19:01:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.863 19:01:18 -- common/autotest_common.sh@10 -- # set +x 00:04:12.863 
************************************ 00:04:12.863 START TEST alias_rpc 00:04:12.863 ************************************ 00:04:12.863 19:01:18 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:12.863 * Looking for test storage... 00:04:12.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:12.863 19:01:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:12.863 19:01:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2468537 00:04:12.863 19:01:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2468537 00:04:12.863 19:01:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.863 19:01:18 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2468537 ']' 00:04:12.863 19:01:18 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.863 19:01:18 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:12.863 19:01:18 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.863 19:01:18 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:12.863 19:01:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.863 [2024-07-24 19:01:18.827328] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:04:12.863 [2024-07-24 19:01:18.827431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468537 ] 00:04:12.863 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.122 [2024-07-24 19:01:18.889953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.122 [2024-07-24 19:01:19.007024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.381 19:01:19 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:13.381 19:01:19 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:13.381 19:01:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:13.641 19:01:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2468537 00:04:13.641 19:01:19 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2468537 ']' 00:04:13.641 19:01:19 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2468537 00:04:13.641 19:01:19 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:13.641 19:01:19 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:13.641 19:01:19 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2468537 00:04:13.641 19:01:19 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:13.641 19:01:19 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:13.641 19:01:19 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2468537' 00:04:13.641 killing process with pid 2468537 00:04:13.641 19:01:19 alias_rpc -- common/autotest_common.sh@969 -- # kill 2468537 00:04:13.641 19:01:19 
alias_rpc -- common/autotest_common.sh@974 -- # wait 2468537 00:04:14.213 00:04:14.213 real 0m1.201s 00:04:14.213 user 0m1.392s 00:04:14.213 sys 0m0.400s 00:04:14.213 19:01:19 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.213 19:01:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.213 ************************************ 00:04:14.213 END TEST alias_rpc 00:04:14.213 ************************************ 00:04:14.213 19:01:19 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:14.213 19:01:19 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:14.213 19:01:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.213 19:01:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.213 19:01:19 -- common/autotest_common.sh@10 -- # set +x 00:04:14.213 ************************************ 00:04:14.213 START TEST spdkcli_tcp 00:04:14.213 ************************************ 00:04:14.213 19:01:19 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:14.213 * Looking for test storage... 00:04:14.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:14.213 19:01:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:14.213 19:01:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:14.213 19:01:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:14.213 19:01:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:14.213 19:01:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:14.213 19:01:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:14.213 19:01:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:14.213 19:01:20 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:14.213 19:01:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.213 19:01:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2468694 00:04:14.213 19:01:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:14.213 19:01:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2468694 00:04:14.213 19:01:20 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2468694 ']' 00:04:14.213 19:01:20 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.213 19:01:20 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:14.213 19:01:20 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.213 19:01:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:14.213 19:01:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.213 [2024-07-24 19:01:20.080931] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
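The alias_rpc pass above reduces to four moves: start spdk_tgt, wait for its UNIX-domain RPC socket, replay a JSON configuration through rpc.py load_config with the -i flag (the flag is verbatim from the xtrace; reading it as "include deprecated aliases" is an inference from the test's name, not something the log states), and kill the target. A hedged sketch, with paths shortened and the empty config body being an assumption:

    # Sketch of the alias_rpc.sh flow; traps and error handling omitted.
    build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"        # helper from autotest_common.sh
    # Replay a config from stdin; '-i' copied verbatim from the log.
    scripts/rpc.py load_config -i <<< '{"subsystems": []}'
    kill -SIGINT "$spdk_tgt_pid" && wait "$spdk_tgt_pid"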
00:04:14.213 [2024-07-24 19:01:20.081036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468694 ] 00:04:14.213 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.213 [2024-07-24 19:01:20.140898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.471 [2024-07-24 19:01:20.258876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.471 [2024-07-24 19:01:20.258944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.729 19:01:20 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:14.729 19:01:20 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:14.729 19:01:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2468790 00:04:14.729 19:01:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:14.729 19:01:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:14.988 [ 00:04:14.988 "bdev_malloc_delete", 00:04:14.988 "bdev_malloc_create", 00:04:14.988 "bdev_null_resize", 00:04:14.988 "bdev_null_delete", 00:04:14.988 "bdev_null_create", 00:04:14.988 "bdev_nvme_cuse_unregister", 00:04:14.988 "bdev_nvme_cuse_register", 00:04:14.988 "bdev_opal_new_user", 00:04:14.988 "bdev_opal_set_lock_state", 00:04:14.988 "bdev_opal_delete", 00:04:14.988 "bdev_opal_get_info", 00:04:14.988 "bdev_opal_create", 00:04:14.988 "bdev_nvme_opal_revert", 00:04:14.988 "bdev_nvme_opal_init", 00:04:14.988 "bdev_nvme_send_cmd", 00:04:14.988 "bdev_nvme_get_path_iostat", 00:04:14.988 "bdev_nvme_get_mdns_discovery_info", 00:04:14.988 "bdev_nvme_stop_mdns_discovery", 00:04:14.988 "bdev_nvme_start_mdns_discovery", 00:04:14.988 "bdev_nvme_set_multipath_policy", 00:04:14.988 "bdev_nvme_set_preferred_path", 00:04:14.988 "bdev_nvme_get_io_paths", 00:04:14.988 "bdev_nvme_remove_error_injection", 00:04:14.988 "bdev_nvme_add_error_injection", 00:04:14.988 "bdev_nvme_get_discovery_info", 00:04:14.988 "bdev_nvme_stop_discovery", 00:04:14.988 "bdev_nvme_start_discovery", 00:04:14.988 "bdev_nvme_get_controller_health_info", 00:04:14.988 "bdev_nvme_disable_controller", 00:04:14.988 "bdev_nvme_enable_controller", 00:04:14.988 "bdev_nvme_reset_controller", 00:04:14.988 "bdev_nvme_get_transport_statistics", 00:04:14.988 "bdev_nvme_apply_firmware", 00:04:14.988 "bdev_nvme_detach_controller", 00:04:14.988 "bdev_nvme_get_controllers", 00:04:14.988 "bdev_nvme_attach_controller", 00:04:14.988 "bdev_nvme_set_hotplug", 00:04:14.988 "bdev_nvme_set_options", 00:04:14.988 "bdev_passthru_delete", 00:04:14.988 "bdev_passthru_create", 00:04:14.988 "bdev_lvol_set_parent_bdev", 00:04:14.988 "bdev_lvol_set_parent", 00:04:14.988 "bdev_lvol_check_shallow_copy", 00:04:14.988 "bdev_lvol_start_shallow_copy", 00:04:14.988 "bdev_lvol_grow_lvstore", 00:04:14.988 "bdev_lvol_get_lvols", 00:04:14.988 "bdev_lvol_get_lvstores", 00:04:14.988 "bdev_lvol_delete", 00:04:14.988 "bdev_lvol_set_read_only", 00:04:14.988 "bdev_lvol_resize", 00:04:14.988 "bdev_lvol_decouple_parent", 00:04:14.988 "bdev_lvol_inflate", 00:04:14.988 "bdev_lvol_rename", 00:04:14.988 "bdev_lvol_clone_bdev", 00:04:14.988 "bdev_lvol_clone", 00:04:14.988 "bdev_lvol_snapshot", 00:04:14.988 "bdev_lvol_create", 00:04:14.988 "bdev_lvol_delete_lvstore", 00:04:14.988 
"bdev_lvol_rename_lvstore", 00:04:14.988 "bdev_lvol_create_lvstore", 00:04:14.988 "bdev_raid_set_options", 00:04:14.988 "bdev_raid_remove_base_bdev", 00:04:14.988 "bdev_raid_add_base_bdev", 00:04:14.988 "bdev_raid_delete", 00:04:14.988 "bdev_raid_create", 00:04:14.988 "bdev_raid_get_bdevs", 00:04:14.988 "bdev_error_inject_error", 00:04:14.988 "bdev_error_delete", 00:04:14.988 "bdev_error_create", 00:04:14.988 "bdev_split_delete", 00:04:14.988 "bdev_split_create", 00:04:14.988 "bdev_delay_delete", 00:04:14.988 "bdev_delay_create", 00:04:14.988 "bdev_delay_update_latency", 00:04:14.988 "bdev_zone_block_delete", 00:04:14.988 "bdev_zone_block_create", 00:04:14.988 "blobfs_create", 00:04:14.988 "blobfs_detect", 00:04:14.988 "blobfs_set_cache_size", 00:04:14.988 "bdev_aio_delete", 00:04:14.988 "bdev_aio_rescan", 00:04:14.988 "bdev_aio_create", 00:04:14.988 "bdev_ftl_set_property", 00:04:14.988 "bdev_ftl_get_properties", 00:04:14.988 "bdev_ftl_get_stats", 00:04:14.988 "bdev_ftl_unmap", 00:04:14.988 "bdev_ftl_unload", 00:04:14.988 "bdev_ftl_delete", 00:04:14.988 "bdev_ftl_load", 00:04:14.988 "bdev_ftl_create", 00:04:14.988 "bdev_virtio_attach_controller", 00:04:14.988 "bdev_virtio_scsi_get_devices", 00:04:14.988 "bdev_virtio_detach_controller", 00:04:14.988 "bdev_virtio_blk_set_hotplug", 00:04:14.988 "bdev_iscsi_delete", 00:04:14.988 "bdev_iscsi_create", 00:04:14.988 "bdev_iscsi_set_options", 00:04:14.988 "accel_error_inject_error", 00:04:14.988 "ioat_scan_accel_module", 00:04:14.988 "dsa_scan_accel_module", 00:04:14.988 "iaa_scan_accel_module", 00:04:14.988 "vfu_virtio_create_scsi_endpoint", 00:04:14.988 "vfu_virtio_scsi_remove_target", 00:04:14.988 "vfu_virtio_scsi_add_target", 00:04:14.988 "vfu_virtio_create_blk_endpoint", 00:04:14.988 "vfu_virtio_delete_endpoint", 00:04:14.988 "keyring_file_remove_key", 00:04:14.988 "keyring_file_add_key", 00:04:14.988 "keyring_linux_set_options", 00:04:14.988 "iscsi_get_histogram", 00:04:14.988 "iscsi_enable_histogram", 00:04:14.988 "iscsi_set_options", 00:04:14.988 "iscsi_get_auth_groups", 00:04:14.988 "iscsi_auth_group_remove_secret", 00:04:14.988 "iscsi_auth_group_add_secret", 00:04:14.988 "iscsi_delete_auth_group", 00:04:14.988 "iscsi_create_auth_group", 00:04:14.988 "iscsi_set_discovery_auth", 00:04:14.988 "iscsi_get_options", 00:04:14.988 "iscsi_target_node_request_logout", 00:04:14.988 "iscsi_target_node_set_redirect", 00:04:14.988 "iscsi_target_node_set_auth", 00:04:14.988 "iscsi_target_node_add_lun", 00:04:14.988 "iscsi_get_stats", 00:04:14.988 "iscsi_get_connections", 00:04:14.988 "iscsi_portal_group_set_auth", 00:04:14.989 "iscsi_start_portal_group", 00:04:14.989 "iscsi_delete_portal_group", 00:04:14.989 "iscsi_create_portal_group", 00:04:14.989 "iscsi_get_portal_groups", 00:04:14.989 "iscsi_delete_target_node", 00:04:14.989 "iscsi_target_node_remove_pg_ig_maps", 00:04:14.989 "iscsi_target_node_add_pg_ig_maps", 00:04:14.989 "iscsi_create_target_node", 00:04:14.989 "iscsi_get_target_nodes", 00:04:14.989 "iscsi_delete_initiator_group", 00:04:14.989 "iscsi_initiator_group_remove_initiators", 00:04:14.989 "iscsi_initiator_group_add_initiators", 00:04:14.989 "iscsi_create_initiator_group", 00:04:14.989 "iscsi_get_initiator_groups", 00:04:14.989 "nvmf_set_crdt", 00:04:14.989 "nvmf_set_config", 00:04:14.989 "nvmf_set_max_subsystems", 00:04:14.989 "nvmf_stop_mdns_prr", 00:04:14.989 "nvmf_publish_mdns_prr", 00:04:14.989 "nvmf_subsystem_get_listeners", 00:04:14.989 "nvmf_subsystem_get_qpairs", 00:04:14.989 "nvmf_subsystem_get_controllers", 00:04:14.989 
"nvmf_get_stats", 00:04:14.989 "nvmf_get_transports", 00:04:14.989 "nvmf_create_transport", 00:04:14.989 "nvmf_get_targets", 00:04:14.989 "nvmf_delete_target", 00:04:14.989 "nvmf_create_target", 00:04:14.989 "nvmf_subsystem_allow_any_host", 00:04:14.989 "nvmf_subsystem_remove_host", 00:04:14.989 "nvmf_subsystem_add_host", 00:04:14.989 "nvmf_ns_remove_host", 00:04:14.989 "nvmf_ns_add_host", 00:04:14.989 "nvmf_subsystem_remove_ns", 00:04:14.989 "nvmf_subsystem_add_ns", 00:04:14.989 "nvmf_subsystem_listener_set_ana_state", 00:04:14.989 "nvmf_discovery_get_referrals", 00:04:14.989 "nvmf_discovery_remove_referral", 00:04:14.989 "nvmf_discovery_add_referral", 00:04:14.989 "nvmf_subsystem_remove_listener", 00:04:14.989 "nvmf_subsystem_add_listener", 00:04:14.989 "nvmf_delete_subsystem", 00:04:14.989 "nvmf_create_subsystem", 00:04:14.989 "nvmf_get_subsystems", 00:04:14.989 "env_dpdk_get_mem_stats", 00:04:14.989 "nbd_get_disks", 00:04:14.989 "nbd_stop_disk", 00:04:14.989 "nbd_start_disk", 00:04:14.989 "ublk_recover_disk", 00:04:14.989 "ublk_get_disks", 00:04:14.989 "ublk_stop_disk", 00:04:14.989 "ublk_start_disk", 00:04:14.989 "ublk_destroy_target", 00:04:14.989 "ublk_create_target", 00:04:14.989 "virtio_blk_create_transport", 00:04:14.989 "virtio_blk_get_transports", 00:04:14.989 "vhost_controller_set_coalescing", 00:04:14.989 "vhost_get_controllers", 00:04:14.989 "vhost_delete_controller", 00:04:14.989 "vhost_create_blk_controller", 00:04:14.989 "vhost_scsi_controller_remove_target", 00:04:14.989 "vhost_scsi_controller_add_target", 00:04:14.989 "vhost_start_scsi_controller", 00:04:14.989 "vhost_create_scsi_controller", 00:04:14.989 "thread_set_cpumask", 00:04:14.989 "framework_get_governor", 00:04:14.989 "framework_get_scheduler", 00:04:14.989 "framework_set_scheduler", 00:04:14.989 "framework_get_reactors", 00:04:14.989 "thread_get_io_channels", 00:04:14.989 "thread_get_pollers", 00:04:14.989 "thread_get_stats", 00:04:14.989 "framework_monitor_context_switch", 00:04:14.989 "spdk_kill_instance", 00:04:14.989 "log_enable_timestamps", 00:04:14.989 "log_get_flags", 00:04:14.989 "log_clear_flag", 00:04:14.989 "log_set_flag", 00:04:14.989 "log_get_level", 00:04:14.989 "log_set_level", 00:04:14.989 "log_get_print_level", 00:04:14.989 "log_set_print_level", 00:04:14.989 "framework_enable_cpumask_locks", 00:04:14.989 "framework_disable_cpumask_locks", 00:04:14.989 "framework_wait_init", 00:04:14.989 "framework_start_init", 00:04:14.989 "scsi_get_devices", 00:04:14.989 "bdev_get_histogram", 00:04:14.989 "bdev_enable_histogram", 00:04:14.989 "bdev_set_qos_limit", 00:04:14.989 "bdev_set_qd_sampling_period", 00:04:14.989 "bdev_get_bdevs", 00:04:14.989 "bdev_reset_iostat", 00:04:14.989 "bdev_get_iostat", 00:04:14.989 "bdev_examine", 00:04:14.989 "bdev_wait_for_examine", 00:04:14.989 "bdev_set_options", 00:04:14.989 "notify_get_notifications", 00:04:14.989 "notify_get_types", 00:04:14.989 "accel_get_stats", 00:04:14.989 "accel_set_options", 00:04:14.989 "accel_set_driver", 00:04:14.989 "accel_crypto_key_destroy", 00:04:14.989 "accel_crypto_keys_get", 00:04:14.989 "accel_crypto_key_create", 00:04:14.989 "accel_assign_opc", 00:04:14.989 "accel_get_module_info", 00:04:14.989 "accel_get_opc_assignments", 00:04:14.989 "vmd_rescan", 00:04:14.989 "vmd_remove_device", 00:04:14.989 "vmd_enable", 00:04:14.989 "sock_get_default_impl", 00:04:14.989 "sock_set_default_impl", 00:04:14.989 "sock_impl_set_options", 00:04:14.989 "sock_impl_get_options", 00:04:14.989 "iobuf_get_stats", 00:04:14.989 "iobuf_set_options", 
00:04:14.989 "keyring_get_keys", 00:04:14.989 "framework_get_pci_devices", 00:04:14.989 "framework_get_config", 00:04:14.989 "framework_get_subsystems", 00:04:14.989 "vfu_tgt_set_base_path", 00:04:14.989 "trace_get_info", 00:04:14.989 "trace_get_tpoint_group_mask", 00:04:14.989 "trace_disable_tpoint_group", 00:04:14.989 "trace_enable_tpoint_group", 00:04:14.989 "trace_clear_tpoint_mask", 00:04:14.989 "trace_set_tpoint_mask", 00:04:14.989 "spdk_get_version", 00:04:14.989 "rpc_get_methods" 00:04:14.989 ] 00:04:14.989 19:01:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:14.989 19:01:20 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:14.989 19:01:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.989 19:01:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:14.989 19:01:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2468694 00:04:14.989 19:01:20 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2468694 ']' 00:04:14.989 19:01:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2468694 00:04:14.989 19:01:20 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:14.989 19:01:20 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:14.989 19:01:20 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2468694 00:04:14.989 19:01:20 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:14.989 19:01:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:14.989 19:01:20 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2468694' 00:04:14.989 killing process with pid 2468694 00:04:14.989 19:01:20 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2468694 00:04:14.989 19:01:20 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2468694 00:04:15.248 00:04:15.248 real 0m1.194s 00:04:15.248 user 0m2.168s 00:04:15.248 sys 0m0.423s 00:04:15.248 19:01:21 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.248 19:01:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:15.248 ************************************ 00:04:15.248 END TEST spdkcli_tcp 00:04:15.248 ************************************ 00:04:15.248 19:01:21 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:15.248 19:01:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.248 19:01:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.248 19:01:21 -- common/autotest_common.sh@10 -- # set +x 00:04:15.248 ************************************ 00:04:15.248 START TEST dpdk_mem_utility 00:04:15.248 ************************************ 00:04:15.248 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:15.506 * Looking for test storage... 
00:04:15.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:15.506 19:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:15.506 19:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2468864 00:04:15.506 19:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2468864 00:04:15.506 19:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.506 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2468864 ']' 00:04:15.506 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.506 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:15.506 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.506 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:15.506 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:15.506 [2024-07-24 19:01:21.330651] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:04:15.506 [2024-07-24 19:01:21.330754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468864 ] 00:04:15.507 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.507 [2024-07-24 19:01:21.392954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.507 [2024-07-24 19:01:21.509805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.764 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:15.764 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:15.764 19:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:15.764 19:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:15.764 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.764 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:15.764 { 00:04:15.764 "filename": "/tmp/spdk_mem_dump.txt" 00:04:15.764 } 00:04:15.764 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.764 19:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:16.023 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:16.023 1 heaps totaling size 814.000000 MiB 00:04:16.023 size: 814.000000 MiB heap id: 0 00:04:16.023 end heaps---------- 00:04:16.023 8 mempools totaling size 598.116089 MiB 00:04:16.023 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:16.023 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:16.023 size: 84.521057 MiB name: bdev_io_2468864 00:04:16.023 size: 51.011292 MiB name: evtpool_2468864 00:04:16.023 
size: 50.003479 MiB name: msgpool_2468864 00:04:16.023 size: 21.763794 MiB name: PDU_Pool 00:04:16.023 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:16.023 size: 0.026123 MiB name: Session_Pool 00:04:16.023 end mempools------- 00:04:16.023 6 memzones totaling size 4.142822 MiB 00:04:16.023 size: 1.000366 MiB name: RG_ring_0_2468864 00:04:16.023 size: 1.000366 MiB name: RG_ring_1_2468864 00:04:16.023 size: 1.000366 MiB name: RG_ring_4_2468864 00:04:16.023 size: 1.000366 MiB name: RG_ring_5_2468864 00:04:16.023 size: 0.125366 MiB name: RG_ring_2_2468864 00:04:16.023 size: 0.015991 MiB name: RG_ring_3_2468864 00:04:16.023 end memzones------- 00:04:16.023 19:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:16.023 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:16.023 list of free elements. size: 12.519348 MiB 00:04:16.023 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:16.023 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:16.023 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:16.023 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:16.023 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:16.023 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:16.023 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:16.023 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:16.023 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:16.023 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:16.023 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:16.023 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:16.023 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:16.023 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:16.023 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:16.023 list of standard malloc elements. 
size: 199.218079 MiB 00:04:16.023 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:16.023 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:16.023 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:16.023 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:16.023 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:16.023 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:16.023 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:16.023 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:16.023 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:16.023 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:16.023 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:16.023 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:16.023 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:16.023 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:16.023 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:16.023 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:16.023 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:16.023 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:16.023 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:16.023 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:16.023 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:16.023 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:16.024 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:16.024 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:16.024 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:16.024 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:16.024 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:16.024 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:16.024 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:16.024 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:16.024 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:16.024 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:16.024 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:16.024 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:16.024 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:16.024 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:16.024 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:16.024 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:16.024 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:16.024 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:16.024 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:16.024 list of memzone associated elements. 
size: 602.262573 MiB 00:04:16.024 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:16.024 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:16.024 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:16.024 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:16.024 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:16.024 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2468864_0 00:04:16.024 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:16.024 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2468864_0 00:04:16.024 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:16.024 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2468864_0 00:04:16.024 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:16.024 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:16.024 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:16.024 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:16.024 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:16.024 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2468864 00:04:16.024 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:16.024 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2468864 00:04:16.024 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:16.024 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2468864 00:04:16.024 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:16.024 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:16.024 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:16.024 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:16.024 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:16.024 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:16.024 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:16.024 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:16.024 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:16.024 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2468864 00:04:16.024 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:16.024 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2468864 00:04:16.024 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:16.024 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2468864 00:04:16.024 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:16.024 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2468864 00:04:16.024 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:16.024 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2468864 00:04:16.024 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:16.024 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:16.024 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:16.024 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:16.024 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:16.024 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:16.024 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:16.024 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2468864 00:04:16.024 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:16.024 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:16.024 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:16.024 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:16.024 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:16.024 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2468864 00:04:16.024 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:16.024 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:16.024 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:16.024 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2468864 00:04:16.024 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:16.024 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2468864 00:04:16.024 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:16.024 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:16.024 19:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:16.024 19:01:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2468864 00:04:16.024 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2468864 ']' 00:04:16.024 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2468864 00:04:16.024 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:16.024 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:16.024 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2468864 00:04:16.024 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:16.024 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:16.024 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2468864' 00:04:16.024 killing process with pid 2468864 00:04:16.024 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2468864 00:04:16.024 19:01:21 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2468864 00:04:16.284 00:04:16.284 real 0m1.012s 00:04:16.284 user 0m1.053s 00:04:16.284 sys 0m0.386s 00:04:16.284 19:01:22 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.284 19:01:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:16.284 ************************************ 00:04:16.284 END TEST dpdk_mem_utility 00:04:16.284 ************************************ 00:04:16.284 19:01:22 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:16.284 19:01:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.284 19:01:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.284 19:01:22 -- common/autotest_common.sh@10 -- # set +x 00:04:16.284 ************************************ 00:04:16.284 START TEST event 00:04:16.284 ************************************ 00:04:16.284 19:01:22 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:16.542 * Looking for test storage... 
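The dpdk_mem_utility pass above has two halves: the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK heap state to the file named in the reply ({"filename": "/tmp/spdk_mem_dump.txt"}), and scripts/dpdk_mem_info.py, which renders that dump first as the heap/mempool/memzone summary and then, via the -m 0 invocation, as the per-element listing for heap 0 (that reading of -m 0 is inferred from the output it produced here, not documented in the log). In sketch form, against an already-running target:

    # Ask the target to dump its DPDK memory state; the RPC replies with
    # the dump file name, /tmp/spdk_mem_dump.txt in this run.
    scripts/rpc.py env_dpdk_get_mem_stats
    # Heap / mempool / memzone summary.
    scripts/dpdk_mem_info.py
    # Per-element breakdown (the '-m 0' form used by the test).
    scripts/dpdk_mem_info.py -m 0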
00:04:16.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:16.542 19:01:22 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:16.542 19:01:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:16.542 19:01:22 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:16.542 19:01:22 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:16.542 19:01:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.542 19:01:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:16.542 ************************************ 00:04:16.542 START TEST event_perf 00:04:16.542 ************************************ 00:04:16.542 19:01:22 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:16.542 Running I/O for 1 seconds...[2024-07-24 19:01:22.382475] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:04:16.542 [2024-07-24 19:01:22.382557] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469019 ] 00:04:16.542 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.542 [2024-07-24 19:01:22.446570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:16.800 [2024-07-24 19:01:22.569506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.800 [2024-07-24 19:01:22.569758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:16.800 [2024-07-24 19:01:22.569818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:16.800 [2024-07-24 19:01:22.569828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.732 Running I/O for 1 seconds... 00:04:17.732 lcore 0: 226585 00:04:17.732 lcore 1: 226585 00:04:17.732 lcore 2: 226585 00:04:17.732 lcore 3: 226585 00:04:17.732 done. 00:04:17.732 00:04:17.732 real 0m1.311s 00:04:17.732 user 0m4.216s 00:04:17.732 sys 0m0.081s 00:04:17.732 19:01:23 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.732 19:01:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:17.732 ************************************ 00:04:17.732 END TEST event_perf 00:04:17.732 ************************************ 00:04:17.732 19:01:23 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:17.732 19:01:23 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:17.733 19:01:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.733 19:01:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:17.733 ************************************ 00:04:17.733 START TEST event_reactor 00:04:17.733 ************************************ 00:04:17.733 19:01:23 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:17.733 [2024-07-24 19:01:23.745418] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
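For reference, event_perf above ran with a four-core mask for one second, and each lcore retired 226585 events, roughly 0.9M events total across cores 0-3; the reactor test just starting takes only a duration. The two invocations, flags copied from the run_test lines:

    # -m 0xF = cores 0-3; -t 1 = run for one second.
    test/event/event_perf/event_perf -m 0xF -t 1
    test/event/reactor/reactor -t 1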
00:04:17.733 [2024-07-24 19:01:23.745503] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469210 ] 00:04:17.991 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.991 [2024-07-24 19:01:23.807530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.991 [2024-07-24 19:01:23.927750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.365 test_start 00:04:19.365 oneshot 00:04:19.365 tick 100 00:04:19.365 tick 100 00:04:19.365 tick 250 00:04:19.365 tick 100 00:04:19.365 tick 100 00:04:19.365 tick 100 00:04:19.365 tick 250 00:04:19.365 tick 500 00:04:19.365 tick 100 00:04:19.365 tick 100 00:04:19.365 tick 250 00:04:19.365 tick 100 00:04:19.365 tick 100 00:04:19.365 test_end 00:04:19.365 00:04:19.365 real 0m1.308s 00:04:19.365 user 0m1.222s 00:04:19.365 sys 0m0.079s 00:04:19.365 19:01:25 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.365 19:01:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:19.365 ************************************ 00:04:19.365 END TEST event_reactor 00:04:19.365 ************************************ 00:04:19.365 19:01:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:19.365 19:01:25 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:19.365 19:01:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.365 19:01:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:19.365 ************************************ 00:04:19.365 START TEST event_reactor_perf 00:04:19.365 ************************************ 00:04:19.365 19:01:25 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:19.365 [2024-07-24 19:01:25.109590] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:04:19.365 [2024-07-24 19:01:25.109665] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469361 ] 00:04:19.365 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.365 [2024-07-24 19:01:25.169626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.365 [2024-07-24 19:01:25.289083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.738 test_start 00:04:20.738 test_end 00:04:20.738 Performance: 328281 events per second 00:04:20.738 00:04:20.738 real 0m1.304s 00:04:20.738 user 0m1.221s 00:04:20.738 sys 0m0.076s 00:04:20.738 19:01:26 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.738 19:01:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:20.738 ************************************ 00:04:20.739 END TEST event_reactor_perf 00:04:20.739 ************************************ 00:04:20.739 19:01:26 event -- event/event.sh@49 -- # uname -s 00:04:20.739 19:01:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:20.739 19:01:26 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:20.739 19:01:26 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.739 19:01:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.739 19:01:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.739 ************************************ 00:04:20.739 START TEST event_scheduler 00:04:20.739 ************************************ 00:04:20.739 19:01:26 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:20.739 * Looking for test storage... 00:04:20.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:20.739 19:01:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:20.739 19:01:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2469509 00:04:20.739 19:01:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.739 19:01:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:20.739 19:01:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2469509 00:04:20.739 19:01:26 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2469509 ']' 00:04:20.739 19:01:26 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.739 19:01:26 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:20.739 19:01:26 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
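Several apps in this section print the same 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' line; that comes from the shared waitforlisten helper in autotest_common.sh. Its exact implementation is not shown in this log, but a simplified stand-in that captures the observable behavior (poll until the socket file appears while the process is still alive) could look like:

    # Simplified stand-in for waitforlisten; NOT the real helper's code.
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app exited early
            [[ -S $sock ]] && return 0               # socket file exists
            sleep 0.1
        done
        return 1                                     # timed out
    }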
00:04:20.739 19:01:26 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.739 19:01:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:20.739 [2024-07-24 19:01:26.566132] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:04:20.739 [2024-07-24 19:01:26.566236] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469509 ] 00:04:20.739 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.739 [2024-07-24 19:01:26.630276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:20.739 [2024-07-24 19:01:26.750750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.998 [2024-07-24 19:01:26.754531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.998 [2024-07-24 19:01:26.754557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:20.998 [2024-07-24 19:01:26.754561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:20.998 19:01:26 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:20.998 19:01:26 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:20.998 19:01:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:20.998 19:01:26 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.998 19:01:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:20.998 [2024-07-24 19:01:26.819545] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:20.998 [2024-07-24 19:01:26.819579] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:20.998 [2024-07-24 19:01:26.819599] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:20.998 [2024-07-24 19:01:26.819611] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:20.998 [2024-07-24 19:01:26.819623] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:20.998 19:01:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.998 19:01:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:20.998 19:01:26 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.998 19:01:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:20.998 [2024-07-24 19:01:26.905533] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
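With the app initialized, scheduler_create_thread exercises a set of plugin-provided RPCs (scheduler_thread_create, scheduler_thread_set_active, scheduler_thread_delete, all visible in the xtrace around this point). A condensed sketch of the cycle, assuming the scheduler_plugin module is importable by rpc.py as the test's environment arranges, and with socket options omitted:

    # Threads pinned to one core each, either busy (-a 100) or idle (-a 0).
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # Create a thread, retune it to 50% activity, then delete it by id.
    id=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$id" 50
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$id"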
00:04:20.998 19:01:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.998 19:01:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:20.998 19:01:26 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.998 19:01:26 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.998 19:01:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:20.998 ************************************ 00:04:20.998 START TEST scheduler_create_thread 00:04:20.998 ************************************ 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.998 2 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.998 3 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.998 4 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.998 5 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.998 6 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.998 7 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.998 8 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.998 19:01:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.998 9 00:04:20.998 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.998 19:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:20.998 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.998 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.998 10 00:04:20.999 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.999 19:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.257 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.515 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:21.515 00:04:21.515 real 0m0.592s 00:04:21.515 user 0m0.013s 00:04:21.515 sys 0m0.002s 00:04:21.515 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.515 19:01:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.515 ************************************ 00:04:21.515 END TEST scheduler_create_thread 00:04:21.515 ************************************ 00:04:21.774 19:01:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:21.774 19:01:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2469509 00:04:21.774 19:01:27 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2469509 ']' 00:04:21.774 19:01:27 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2469509 00:04:21.774 19:01:27 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:21.774 19:01:27 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:21.774 19:01:27 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2469509 00:04:21.774 19:01:27 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:21.774 19:01:27 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:21.774 19:01:27 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2469509' 00:04:21.774 killing process with pid 2469509 00:04:21.774 19:01:27 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2469509 00:04:21.774 19:01:27 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2469509 00:04:22.032 [2024-07-24 19:01:28.005983] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
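Stepping back: the scheduler app was launched with --wait-for-rpc, which is what let the test pick a scheduler before subsystem init. The log earlier shows framework_set_scheduler dynamic followed by framework_start_init doing exactly that (the dynamic scheduler then notes 'Unable to initialize dpdk governor' and falls back to its load/core/busy limits of 20/80/95). The launch-and-init sequence, flags copied from the log:

    # -m 0xF: cores 0-3; -p 0x2: main lcore 2 (cf. --main-lcore=2 in the
    # EAL line above); --wait-for-rpc: hold init until told to proceed.
    test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    # ...after the RPC socket is up:
    scripts/rpc.py framework_set_scheduler dynamic
    scripts/rpc.py framework_start_init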
00:04:22.290 00:04:22.290 real 0m1.735s 00:04:22.290 user 0m2.197s 00:04:22.290 sys 0m0.322s 00:04:22.290 19:01:28 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.290 19:01:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.290 ************************************ 00:04:22.290 END TEST event_scheduler 00:04:22.290 ************************************ 00:04:22.290 19:01:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:22.290 19:01:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:22.290 19:01:28 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.290 19:01:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.290 19:01:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.290 ************************************ 00:04:22.290 START TEST app_repeat 00:04:22.290 ************************************ 00:04:22.290 19:01:28 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:22.290 19:01:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.290 19:01:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.290 19:01:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:22.290 19:01:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.290 19:01:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:22.290 19:01:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:22.290 19:01:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:22.290 19:01:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2469756 00:04:22.290 19:01:28 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:22.290 19:01:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.290 19:01:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2469756' 00:04:22.290 Process app_repeat pid: 2469756 00:04:22.290 19:01:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:22.290 19:01:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:22.290 spdk_app_start Round 0 00:04:22.290 19:01:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2469756 /var/tmp/spdk-nbd.sock 00:04:22.290 19:01:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2469756 ']' 00:04:22.290 19:01:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:22.290 19:01:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:22.290 19:01:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:22.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:22.290 19:01:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:22.290 19:01:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:22.290 [2024-07-24 19:01:28.275648] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
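app_repeat talks to its own socket, /var/tmp/spdk-nbd.sock, and the lines that follow show it building its fixture over RPC: two malloc bdevs (bdev_malloc_create 64 4096, i.e. 64 MiB with a 4096-byte block size) exported as /dev/nbd0 and /dev/nbd1. Condensed below, with the second nbd_start_disk inferred from the test's two-device loop rather than quoted from this excerpt:

    # Create Malloc0 and Malloc1: 64 MiB each, 4096-byte blocks.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
    # Export each bdev as a kernel NBD device.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1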
00:04:22.290 [2024-07-24 19:01:28.275722] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469756 ]
00:04:22.290 EAL: No free 2048 kB hugepages reported on node 1
00:04:22.548 [2024-07-24 19:01:28.331082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:22.548 [2024-07-24 19:01:28.434083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:22.548 [2024-07-24 19:01:28.434086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:22.548 19:01:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:22.548 19:01:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:04:22.548 19:01:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:23.113 Malloc0
00:04:23.113 19:01:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:23.371 Malloc1
00:04:23.371 19:01:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:23.371 19:01:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:23.629 /dev/nbd0
00:04:23.629 19:01:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:23.629 19:01:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:23.629 19:01:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:04:23.629 19:01:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:04:23.629 19:01:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:04:23.629 19:01:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:04:23.629 19:01:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:04:23.629 19:01:29 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:04:23.629 19:01:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:04:23.629 19:01:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:04:23.629 19:01:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:23.629 1+0 records in
00:04:23.629 1+0 records out
00:04:23.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165884 s, 24.7 MB/s
00:04:23.629 19:01:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:23.629 19:01:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:04:23.629 19:01:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:23.629 19:01:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:04:23.629 19:01:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:04:23.629 19:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:23.629 19:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:23.629 19:01:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:23.887 /dev/nbd1
00:04:23.887 19:01:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:23.887 19:01:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:23.887 19:01:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:04:23.887 19:01:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:04:23.887 19:01:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:04:23.887 19:01:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:04:23.887 19:01:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:04:23.887 19:01:29 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:04:23.887 19:01:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:04:23.887 19:01:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:04:23.887 19:01:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:23.887 1+0 records in
00:04:23.887 1+0 records out
00:04:23.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233809 s, 17.5 MB/s
00:04:23.887 19:01:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:23.887 19:01:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:04:23.887 19:01:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:23.887 19:01:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:04:23.887 19:01:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:04:23.887 19:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:23.887 19:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:23.887 19:01:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:23.887 19:01:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:23.887 19:01:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:24.146 19:01:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:24.146 {
00:04:24.146 "nbd_device": "/dev/nbd0",
00:04:24.146 "bdev_name": "Malloc0"
00:04:24.146 },
00:04:24.146 {
00:04:24.146 "nbd_device": "/dev/nbd1",
00:04:24.146 "bdev_name": "Malloc1"
00:04:24.146 }
00:04:24.146 ]'
00:04:24.146 19:01:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:24.146 {
00:04:24.146 "nbd_device": "/dev/nbd0",
00:04:24.146 "bdev_name": "Malloc0"
00:04:24.146 },
00:04:24.146 {
00:04:24.146 "nbd_device": "/dev/nbd1",
00:04:24.146 "bdev_name": "Malloc1"
00:04:24.146 }
00:04:24.146 ]'
00:04:24.146 19:01:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:24.146 19:01:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:24.146 /dev/nbd1'
00:04:24.146 19:01:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:24.146 /dev/nbd1'
00:04:24.146 19:01:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:24.404 256+0 records in
00:04:24.404 256+0 records out
00:04:24.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511846 s, 205 MB/s
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:24.404 256+0 records in
00:04:24.404 256+0 records out
00:04:24.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262974 s, 39.9 MB/s
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:24.404 256+0 records in
00:04:24.404 256+0 records out
00:04:24.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276533 s, 37.9 MB/s
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:24.404 19:01:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:24.663 19:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:24.663 19:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:24.663 19:01:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:24.663 19:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:24.663 19:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:24.663 19:01:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:24.663 19:01:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:24.663 19:01:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:24.663 19:01:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:24.663 19:01:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:24.920 19:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:24.920 19:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:24.920 19:01:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:24.920 19:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:24.920 19:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:24.920 19:01:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:24.920 19:01:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:24.920 19:01:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:24.920 19:01:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:24.920 19:01:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:24.920 19:01:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:25.177 19:01:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:25.434 19:01:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:25.434 19:01:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:25.434 19:01:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:25.434 19:01:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:25.434 19:01:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:25.434 19:01:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:25.434 19:01:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:25.434 19:01:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:25.434 19:01:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:25.434 19:01:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:25.434 19:01:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:25.434 19:01:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:26.739 19:01:31 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:26.739 [2024-07-24 19:01:31.744507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:26.739 [2024-07-24 19:01:31.861852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:26.739 [2024-07-24 19:01:31.861877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:26.739 [2024-07-24 19:01:31.912966] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:26.739 [2024-07-24 19:01:31.913046] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:28.634 19:01:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:28.634 19:01:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
spdk_app_start Round 1
19:01:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2469756 /var/tmp/spdk-nbd.sock
00:04:28.634 19:01:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2469756 ']'
00:04:28.634 19:01:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:28.634 19:01:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:28.634 19:01:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
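Round 0 above is one complete nbd_rpc_data_verify pass: expose each malloc bdev as an NBD block device, write 1 MiB of random data with O_DIRECT, compare it back byte for byte, then detach. Condensed into a bash sketch for one device (paths shortened; /tmp/nbdrandtest stands in for the workspace path, and rpc.py plus a listener on /var/tmp/spdk-nbd.sock are assumed to be up as in the trace):

rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"             # shortened path for illustration
$rpc bdev_malloc_create 64 4096                            # prints the new bdev name, e.g. Malloc0
$rpc nbd_start_disk Malloc0 /dev/nbd0                      # expose the bdev as /dev/nbd0
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256   # 1 MiB random test pattern
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                    # byte-for-byte verification
$rpc nbd_stop_disk /dev/nbd0                               # detach; the harness then polls
                                                           # /proc/partitions until nbd0 is gone
rm /tmp/nbdrandtest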
00:04:28.634 19:01:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:28.634 19:01:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:28.891 19:01:34 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:28.891 19:01:34 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:04:28.891 19:01:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:29.149 Malloc0
00:04:29.407 19:01:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:29.665 Malloc1
00:04:29.665 19:01:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:29.665 19:01:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:29.923 /dev/nbd0
00:04:29.923 19:01:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:29.923 19:01:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:29.923 19:01:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:04:29.923 19:01:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:04:29.923 19:01:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:04:29.923 19:01:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:04:29.923 19:01:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:04:29.923 19:01:35 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:04:29.923 19:01:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:04:29.923 19:01:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:04:29.923 19:01:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:29.923 1+0 records in
00:04:29.923 1+0 records out
00:04:29.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174663 s, 23.5 MB/s
00:04:29.923 19:01:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:29.923 19:01:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:04:29.923 19:01:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:29.923 19:01:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:04:29.923 19:01:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:04:29.923 19:01:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:29.923 19:01:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:29.923 19:01:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:30.181 /dev/nbd1
00:04:30.182 19:01:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:30.182 19:01:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:30.182 19:01:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:04:30.182 19:01:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:04:30.182 19:01:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:04:30.182 19:01:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:04:30.182 19:01:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:04:30.182 19:01:36 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:04:30.182 19:01:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:04:30.182 19:01:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:04:30.182 19:01:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:30.182 1+0 records in
00:04:30.182 1+0 records out
00:04:30.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235655 s, 17.4 MB/s
00:04:30.182 19:01:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:30.182 19:01:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:04:30.182 19:01:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:30.182 19:01:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:04:30.182 19:01:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:04:30.182 19:01:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:30.182 19:01:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:30.182 19:01:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:30.182 19:01:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:30.182 19:01:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:30.439 19:01:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:30.439 {
00:04:30.439 "nbd_device": "/dev/nbd0",
00:04:30.439 "bdev_name": "Malloc0"
00:04:30.439 },
00:04:30.439 {
00:04:30.439 "nbd_device": "/dev/nbd1",
00:04:30.439 "bdev_name": "Malloc1"
00:04:30.439 }
00:04:30.439 ]'
00:04:30.439 19:01:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:30.439 {
00:04:30.439 "nbd_device": "/dev/nbd0",
00:04:30.439 "bdev_name": "Malloc0"
00:04:30.439 },
00:04:30.439 {
00:04:30.439 "nbd_device": "/dev/nbd1",
00:04:30.439 "bdev_name": "Malloc1"
00:04:30.439 }
00:04:30.439 ]'
00:04:30.439 19:01:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:30.696 /dev/nbd1'
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:30.696 /dev/nbd1'
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:30.696 256+0 records in
00:04:30.696 256+0 records out
00:04:30.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00599964 s, 175 MB/s
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:30.696 256+0 records in
00:04:30.696 256+0 records out
00:04:30.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255062 s, 41.1 MB/s
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:30.696 256+0 records in
00:04:30.696 256+0 records out
00:04:30.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277717 s, 37.8 MB/s
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:30.696 19:01:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:30.954 19:01:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:30.954 19:01:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:30.954 19:01:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:30.954 19:01:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:30.954 19:01:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:30.954 19:01:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:30.954 19:01:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:30.954 19:01:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:30.954 19:01:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:30.954 19:01:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:31.213 19:01:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:31.213 19:01:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:31.213 19:01:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:31.213 19:01:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:31.213 19:01:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:31.213 19:01:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:31.213 19:01:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:31.213 19:01:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:31.213 19:01:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:31.213 19:01:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:31.213 19:01:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:31.778 19:01:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:31.778 19:01:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:31.778 19:01:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:31.778 19:01:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:31.778 19:01:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:31.778 19:01:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:31.778 19:01:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:31.778 19:01:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:31.778 19:01:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:31.778 19:01:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:31.778 19:01:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:31.778 19:01:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:31.778 19:01:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:32.036 19:01:37 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:32.293 [2024-07-24 19:01:38.095642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:32.293 [2024-07-24 19:01:38.214873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:32.293 [2024-07-24 19:01:38.214903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:32.293 [2024-07-24 19:01:38.266582] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:32.293 [2024-07-24 19:01:38.266665] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:35.651 19:01:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:35.651 19:01:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
spdk_app_start Round 2
19:01:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2469756 /var/tmp/spdk-nbd.sock
00:04:35.651 19:01:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2469756 ']'
00:04:35.651 19:01:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:35.651 19:01:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:35.651 19:01:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:35.652 19:01:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:35.652 19:01:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:35.652 19:01:41 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:35.652 19:01:41 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:04:35.652 19:01:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:35.652 Malloc0
00:04:35.910 19:01:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:35.910 Malloc1
00:04:35.910 19:01:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:35.910 19:01:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:36.168 /dev/nbd0
00:04:36.168 19:01:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:36.168 19:01:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:36.168 19:01:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:04:36.168 19:01:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:04:36.168 19:01:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:04:36.168 19:01:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:04:36.168 19:01:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:04:36.168 19:01:42 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:04:36.168 19:01:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:04:36.168 19:01:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:04:36.168 19:01:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:36.168 1+0 records in
00:04:36.168 1+0 records out
00:04:36.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000153282 s, 26.7 MB/s
00:04:36.168 19:01:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:36.168 19:01:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:04:36.168 19:01:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:36.168 19:01:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:04:36.168 19:01:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:04:36.168 19:01:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:36.168 19:01:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:36.168 19:01:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:36.736 /dev/nbd1
00:04:36.736 19:01:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:36.736 19:01:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:36.736 19:01:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:04:36.736 19:01:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:04:36.736 19:01:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:04:36.736 19:01:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:04:36.736 19:01:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:04:36.736 19:01:42 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:04:36.736 19:01:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:04:36.736 19:01:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:04:36.736 19:01:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:36.736 1+0 records in
00:04:36.736 1+0 records out
00:04:36.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231444 s, 17.7 MB/s
00:04:36.736 19:01:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:36.736 19:01:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:04:36.736 19:01:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:36.736 19:01:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:04:36.736 19:01:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:04:36.736 19:01:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:36.736 19:01:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:36.736 19:01:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:36.736 19:01:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:36.736 19:01:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:36.994 19:01:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:36.994 {
00:04:36.994 "nbd_device": "/dev/nbd0",
00:04:36.994 "bdev_name": "Malloc0"
00:04:36.994 },
00:04:36.994 {
00:04:36.994 "nbd_device": "/dev/nbd1",
00:04:36.994 "bdev_name": "Malloc1"
00:04:36.994 }
00:04:36.994 ]'
00:04:36.994 19:01:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:36.994 {
00:04:36.994 "nbd_device": "/dev/nbd0",
00:04:36.994 "bdev_name": "Malloc0"
00:04:36.994 },
00:04:36.994 {
00:04:36.994 "nbd_device": "/dev/nbd1",
00:04:36.994 "bdev_name": "Malloc1"
00:04:36.994 }
00:04:36.994 ]'
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:36.995 /dev/nbd1'
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:36.995 /dev/nbd1'
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:36.995 256+0 records in
00:04:36.995 256+0 records out
00:04:36.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00593049 s, 177 MB/s
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:36.995 256+0 records in
00:04:36.995 256+0 records out
00:04:36.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255502 s, 41.0 MB/s
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:36.995 256+0 records in
00:04:36.995 256+0 records out
00:04:36.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267599 s, 39.2 MB/s
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:36.995 19:01:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:37.253 19:01:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:37.253 19:01:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:37.253 19:01:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:37.253 19:01:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:37.253 19:01:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:37.253 19:01:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:37.253 19:01:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:37.253 19:01:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:37.253 19:01:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:37.253 19:01:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:37.836 19:01:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:37.836 19:01:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:37.836 19:01:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:37.836 19:01:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:37.836 19:01:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:37.836 19:01:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:37.836 19:01:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:37.836 19:01:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:37.836 19:01:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:37.836 19:01:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:38.094 19:01:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:38.094 19:01:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:38.094 19:01:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:38.094 19:01:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:38.094 19:01:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:38.094 19:01:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:38.094 19:01:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:38.094 19:01:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:38.094 19:01:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:38.094 19:01:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:38.094 19:01:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:38.094 19:01:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:38.094 19:01:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:38.094 19:01:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:38.351 19:01:44 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:38.608 [2024-07-24 19:01:44.437574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:38.609 [2024-07-24 19:01:44.557229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:38.609 [2024-07-24 19:01:44.557229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:38.609 [2024-07-24 19:01:44.606699] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:38.609 [2024-07-24 19:01:44.606777] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:41.902 19:01:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2469756 /var/tmp/spdk-nbd.sock
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2469756 ']'
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:04:41.902 19:01:47 event.app_repeat -- event/event.sh@39 -- # killprocess 2469756
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2469756 ']'
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2469756
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@955 -- # uname
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2469756
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2469756'
killing process with pid 2469756
19:01:47 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2469756
19:01:47 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2469756
00:04:41.902 spdk_app_start is called in Round 0.
00:04:41.902 Shutdown signal received, stop current app iteration
00:04:41.902 Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 reinitialization...
00:04:41.902 spdk_app_start is called in Round 1.
00:04:41.902 Shutdown signal received, stop current app iteration
00:04:41.902 Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 reinitialization...
00:04:41.902 spdk_app_start is called in Round 2.
00:04:41.902 Shutdown signal received, stop current app iteration
00:04:41.902 Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 reinitialization...
00:04:41.902 spdk_app_start is called in Round 3.
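The Round 0-3 summary above comes from app_repeat_test driving the same verify pass while the application reinitializes in place. The control flow, reconstructed as a hedged sketch from the trace (event/event.sh is the authoritative source; helper names match the ones traced earlier):

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    # ...create Malloc0/Malloc1, attach them over NBD, write and verify...
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM  # end this iteration
    sleep 3                                                      # give the app time to reinit
done
waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock  # Round 3 is the final start
killprocess "$repeat_pid"                           # then the app is shut down for good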
00:04:41.902 Shutdown signal received, stop current app iteration
00:04:41.902 19:01:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:04:41.902 19:01:47 event.app_repeat -- event/event.sh@42 -- # return 0
00:04:41.902
00:04:41.902 real 0m19.524s
00:04:41.902 user 0m43.338s
00:04:41.902 sys 0m3.467s
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:41.902 19:01:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:41.902 ************************************
00:04:41.902 END TEST app_repeat
00:04:41.902 ************************************
00:04:41.902 19:01:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:04:41.902 19:01:47 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:41.902 19:01:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:41.902 19:01:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:41.902 19:01:47 event -- common/autotest_common.sh@10 -- # set +x
00:04:41.902 ************************************
00:04:41.902 START TEST cpu_locks
00:04:41.902 ************************************
00:04:41.902 19:01:47 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:41.902 * Looking for test storage...
00:04:41.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:41.902 19:01:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:04:41.902 19:01:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:04:41.902 19:01:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:04:41.902 19:01:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:04:41.902 19:01:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:41.902 19:01:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:41.902 19:01:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:41.902 ************************************
00:04:41.902 START TEST default_locks
00:04:41.902 ************************************
00:04:41.902 19:01:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks
00:04:41.902 19:01:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2471772
00:04:41.902 19:01:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2471772
00:04:41.902 19:01:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:41.902 19:01:47 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2471772 ']'
00:04:41.902 19:01:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:41.902 19:01:47 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:41.902 19:01:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:41.902 19:01:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:41.902 19:01:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:42.160 [2024-07-24 19:01:47.966924] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:04:42.160 [2024-07-24 19:01:47.967027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471772 ]
00:04:42.160 EAL: No free 2048 kB hugepages reported on node 1
00:04:42.160 [2024-07-24 19:01:48.028184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:42.160 [2024-07-24 19:01:48.145283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:42.418 19:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:42.418 19:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0
00:04:42.418 19:01:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2471772
00:04:42.418 19:01:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2471772
00:04:42.418 19:01:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:42.985 lslocks: write error
00:04:42.985 19:01:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2471772
00:04:42.985 19:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2471772 ']'
00:04:42.985 19:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2471772
00:04:42.985 19:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname
00:04:42.985 19:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:42.985 19:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2471772
00:04:42.985 19:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:42.985 19:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:42.985 19:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2471772'
killing process with pid 2471772
00:04:42.985 19:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2471772
00:04:42.985 19:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2471772
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2471772
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2471772
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2471772
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2471772 ']'
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:43.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2471772) - No such process
00:04:43.246 ERROR: process (pid: 2471772) is no longer running
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:43.246
00:04:43.246 real 0m1.195s
00:04:43.246 user 0m1.177s
00:04:43.246 sys 0m0.549s
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:43.246 19:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:43.246 ************************************
00:04:43.246 END TEST default_locks
00:04:43.246 ************************************
00:04:43.246 19:01:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:04:43.246 19:01:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:43.246 19:01:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:43.246 19:01:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:43.246 ************************************
00:04:43.246 START TEST default_locks_via_rpc
00:04:43.246 ************************************
00:04:43.246 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc
00:04:43.246 19:01:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2471923
00:04:43.246 19:01:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:43.246 19:01:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2471923
00:04:43.246 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2471923 ']'
00:04:43.246 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:43.246 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:43.246 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:42.160 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:42.160 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:42.160 [2024-07-24 19:01:49.218702] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:04:42.160 [2024-07-24 19:01:49.218797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471923 ]
00:04:43.507 EAL: No free 2048 kB hugepages reported on node 1
00:04:43.507 [2024-07-24 19:01:49.282126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:43.507 [2024-07-24 19:01:49.401195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2471923
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2471923
00:04:43.766 19:01:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:44.025 19:02:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2471923
00:04:44.025 19:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2471923 ']'
00:04:44.025 19:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2471923
00:04:44.025 19:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:04:44.025 19:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:44.025 19:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2471923
00:04:44.285 19:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:44.285 19:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:44.285 19:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2471923'
killing process with pid 2471923
00:04:44.285 19:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2471923
00:04:44.285 19:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2471923
00:04:44.544
00:04:44.544 real 0m1.212s
00:04:44.544 user 0m1.217s
00:04:44.544 sys 0m0.537s
00:04:44.544 19:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:44.544 19:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:44.544 ************************************
00:04:44.544 END TEST default_locks_via_rpc
00:04:44.544 ************************************
00:04:44.544 19:01:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:04:44.544 19:01:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:44.544 19:01:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:44.544 19:01:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:44.544 ************************************
00:04:44.544 START TEST non_locking_app_on_locked_coremask
00:04:44.544 ************************************
00:04:44.544 19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:04:44.544 19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2472053
00:04:44.544 19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:44.544 19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2472053 /var/tmp/spdk.sock
00:04:44.544 19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2472053 ']'
00:04:44.544 19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:44.544 19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:44.544 19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
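Both default_locks variants above decide pass/fail by asking lslocks whether the target still holds a lock file named spdk_cpu_lock; the stray 'lslocks: write error' lines are harmless and appear because grep -q closes the pipe as soon as it matches. A minimal sketch of that probe (locks_exist here is our condensed rendering of the event/cpu_locks.sh helper):

    locks_exist() {
        local pid=$1
        # succeeds only if the PID holds a file lock whose path contains spdk_cpu_lock
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 2471923 && echo "core lock still held"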
00:04:44.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:44.544 19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:44.544 [2024-07-24 19:01:50.488198] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
[2024-07-24 19:01:50.488301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472053 ]
00:04:44.544 EAL: No free 2048 kB hugepages reported on node 1
00:04:44.544 [2024-07-24 19:01:50.550849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:44.745 [2024-07-24 19:01:50.671841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:45.062 19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2472067
19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2472067 /var/tmp/spdk2.sock
19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2472067 ']'
19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
19:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
[2024-07-24 19:01:50.957734] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
[2024-07-24 19:01:50.957830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472067 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-24 19:01:51.049408] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:45.062 [2024-07-24 19:01:51.049458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:45.320 [2024-07-24 19:01:51.289021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:46.255 19:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
19:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
19:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2472053
19:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2472053
19:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:46.822 lslocks: write error
00:04:46.822 19:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2472053
00:04:46.822 19:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2472053 ']'
00:04:46.822 19:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2472053
00:04:46.822 19:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:46.822 19:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:46.822 19:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2472053
00:04:46.822 19:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:46.822 19:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:46.822 19:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2472053'
killing process with pid 2472053
00:04:46.822 19:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2472053
00:04:46.822 19:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2472053
00:04:47.391 19:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2472067
00:04:47.391 19:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2472067 ']'
00:04:47.391 19:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2472067
00:04:47.392 19:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:47.392 19:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:47.392 19:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2472067
00:04:47.392 19:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:47.392 19:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:47.392 19:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2472067'
killing process with pid 2472067
19:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2472067
19:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2472067
00:04:47.651
00:04:47.651 real 0m3.173s
00:04:47.651 user 0m3.551s
00:04:47.651 sys 0m1.038s
19:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
19:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST non_locking_app_on_locked_coremask
************************************
19:01:53 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
19:01:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
19:01:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
19:01:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST locking_app_on_unlocked_coremask
************************************
19:01:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
19:01:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2472391
19:01:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
19:01:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2472391 /var/tmp/spdk.sock
19:01:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2472391 ']'
19:01:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
19:01:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
19:01:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
19:01:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
19:01:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:47.911 [2024-07-24 19:01:53.718029] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:04:47.911 [2024-07-24 19:01:53.718123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472391 ]
00:04:47.911 EAL: No free 2048 kB hugepages reported on node 1
00:04:47.911 [2024-07-24 19:01:53.780954] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
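The 'CPU core locks deactivated' notice just above is the effect of the --disable-cpumask-locks flag, and the earlier default_locks_via_rpc run showed the same state being toggled at runtime; both RPC method names appear verbatim in this log. A hedged sketch of that sequence, with a sleep as a crude stand-in for the harness's waitforlisten polling:

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk.sock &
    tgt_pid=$!
    sleep 2                                                                   # stand-in for waitforlisten
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks     # claim lock files now
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks    # and release them again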
00:04:47.911 [2024-07-24 19:01:53.781008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-24 19:01:53.901171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:48.170 19:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
19:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
19:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2472405
19:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
19:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2472405 /var/tmp/spdk2.sock
19:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2472405 ']'
19:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
19:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
19:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
19:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
19:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:48.430 [2024-07-24 19:01:54.182911] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:04:48.170 [2024-07-24 19:01:54.183008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472405 ]
00:04:48.430 EAL: No free 2048 kB hugepages reported on node 1
00:04:48.430 [2024-07-24 19:01:54.273994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:48.690 [2024-07-24 19:01:54.514382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:49.258 19:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
19:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
19:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2472405
19:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2472405
19:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:50.192 lslocks: write error
00:04:50.192 19:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2472391
00:04:50.192 19:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2472391 ']'
00:04:50.192 19:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2472391
00:04:50.192 19:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:50.192 19:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:50.192 19:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2472391
00:04:50.192 19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:50.764 19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2472391'
killing process with pid 2472391
19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2472391
19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2472391
00:04:50.764 19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2472405
19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2472405 ']'
19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2472405
19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2472405
19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2472405'
killing process with pid 2472405
19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2472405
19:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2472405
00:04:51.023
00:04:51.023 real 0m3.367s
00:04:51.023 user 0m3.740s
00:04:51.023 sys 0m1.039s
19:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
19:01:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST locking_app_on_unlocked_coremask
************************************
00:04:51.284 19:01:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
19:01:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
19:01:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
19:01:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST locking_app_on_locked_coremask
************************************
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2472736
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2472736 /var/tmp/spdk.sock
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2472736 ']'
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:51.284 [2024-07-24 19:01:57.141438] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
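Every paired scenario above runs two targets side by side, separated only by their RPC sockets (-r /var/tmp/spdk.sock vs -r /var/tmp/spdk2.sock); whichever side is not supposed to own the core lock either skips the claim or is expected to lose it. A minimal sketch of that two-instance setup, assuming both binaries run on the same single-core mask as in the log:

    # first target claims the lock file for core 0
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &
    pid1=$!
    # second target shares core 0 but skips the claim, so both may run
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!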
00:04:51.284 [2024-07-24 19:01:57.141555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472736 ]
00:04:51.284 EAL: No free 2048 kB hugepages reported on node 1
00:04:51.284 [2024-07-24 19:01:57.201730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:51.543 [2024-07-24 19:01:57.322047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:51.543 19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2472743
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2472743 /var/tmp/spdk2.sock
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2472743 /var/tmp/spdk2.sock
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2472743 /var/tmp/spdk2.sock
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2472743 ']'
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
19:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:51.801 [2024-07-24 19:01:57.605238] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:04:51.801 [2024-07-24 19:01:57.605330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472743 ]
00:04:51.801 EAL: No free 2048 kB hugepages reported on node 1
00:04:51.801 [2024-07-24 19:01:57.696032] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2472736 has claimed it.
00:04:51.801 [2024-07-24 19:01:57.696094] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:52.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2472743) - No such process
00:04:52.370 ERROR: process (pid: 2472743) is no longer running
00:04:52.370 19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2472736
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2472736
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:52.941 lslocks: write error
00:04:52.941 19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2472736
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2472736 ']'
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2472736
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2472736
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2472736'
killing process with pid 2472736
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2472736
19:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2472736
00:04:53.200
00:04:53.200 real 0m2.069s
00:04:53.200 user 0m2.349s
00:04:53.200 sys 0m0.637s
00:04:53.200 19:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
19:01:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST locking_app_on_locked_coremask
************************************
19:01:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
19:01:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
19:01:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
19:01:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST locking_overlapped_coremask
************************************
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
19:01:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2472970
19:01:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
19:01:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2472970 /var/tmp/spdk.sock
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2472970 ']'
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:53.460 [2024-07-24 19:01:59.266937] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
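locking_app_on_locked_coremask above passes precisely because its second waitforlisten fails: the NOT wrapper inverts the exit status, while still treating exit codes above 128 (signal deaths) as real failures. A simplified stand-in for that autotest_common.sh helper, under the assumption that only the inversion logic matters here:

    NOT() {
        local es=0
        "$@" || es=$?
        # success only for an ordinary error; a crash (>128) still fails the test
        (( es > 0 && es <= 128 ))
    }
    NOT waitforlisten 2472743 /var/tmp/spdk2.sock && echo "second target rejected, as expected"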
00:04:53.460 [2024-07-24 19:01:59.267026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472970 ]
00:04:53.460 EAL: No free 2048 kB hugepages reported on node 1
00:04:53.460 [2024-07-24 19:01:59.331455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:53.460 [2024-07-24 19:01:59.454508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:53.460 [2024-07-24 19:01:59.454588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:53.460 [2024-07-24 19:01:59.454624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.719 19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
19:01:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2472975
19:01:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
19:01:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2472975 /var/tmp/spdk2.sock
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2472975 /var/tmp/spdk2.sock
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2472975 /var/tmp/spdk2.sock
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2472975 ']'
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
19:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:53.978 [2024-07-24 19:01:59.748928] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:04:53.978 [2024-07-24 19:01:59.749026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472975 ]
00:04:53.978 EAL: No free 2048 kB hugepages reported on node 1
00:04:53.978 [2024-07-24 19:01:59.839294] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2472970 has claimed it.
00:04:53.978 [2024-07-24 19:01:59.839365] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:54.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2472975) - No such process
00:04:54.547 ERROR: process (pid: 2472975) is no longer running
00:04:54.547 19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
19:02:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
19:02:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
19:02:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
19:02:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
19:02:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2472970
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2472970 ']'
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2472970
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2472970
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2472970'
killing process with pid 2472970
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2472970
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2472970
00:04:55.116
00:04:55.116 real 0m1.640s
00:04:55.116 user 0m4.409s
00:04:55.116 sys 0m0.433s
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
19:02:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST locking_overlapped_coremask
************************************
19:02:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
19:02:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
19:02:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
19:02:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST locking_overlapped_coremask_via_rpc
************************************
19:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
19:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2473105
19:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
19:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2473105 /var/tmp/spdk.sock
19:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2473105 ']'
19:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
19:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
19:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
19:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
19:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:55.116 [2024-07-24 19:02:00.966161] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:04:55.116 [2024-07-24 19:02:00.966267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473105 ]
00:04:55.116 EAL: No free 2048 kB hugepages reported on node 1
00:04:55.116 [2024-07-24 19:02:01.026141] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
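The overlap driving both locking_overlapped tests above is plain mask arithmetic: -m 0x7 is binary 00111 (cores 0-2) and -m 0x1c is 11100 (cores 2-4), so the two masks collide exactly on core 2, which is why the claim errors name that core. The collision is easy to verify in shell arithmetic:

    mask_a=0x7     # cores 0,1,2
    mask_b=0x1c    # cores 2,3,4
    printf 'overlap mask: 0x%x\n' $(( mask_a & mask_b ))   # prints 0x4, i.e. bit 2 -> core 2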
00:04:55.116 [2024-07-24 19:02:01.026179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:55.376 [2024-07-24 19:02:01.145001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.376 [2024-07-24 19:02:01.145100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.376 [2024-07-24 19:02:01.145104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.376 19:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.376 19:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:55.376 19:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2473209 00:04:55.376 19:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2473209 /var/tmp/spdk2.sock 00:04:55.376 19:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2473209 ']' 00:04:55.376 19:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.376 19:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.376 19:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.376 19:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:55.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.376 19:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.376 19:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.635 [2024-07-24 19:02:01.443240] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:04:55.635 [2024-07-24 19:02:01.443339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473209 ] 00:04:55.635 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.635 [2024-07-24 19:02:01.532904] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:55.635 [2024-07-24 19:02:01.532947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:55.895 [2024-07-24 19:02:01.772737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.895 [2024-07-24 19:02:01.772790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:04:55.895 [2024-07-24 19:02:01.772792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.464 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.464 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:56.464 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:56.464 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.464 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.723 [2024-07-24 19:02:02.490602] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2473105 has claimed it. 
00:04:56.723 request: 00:04:56.723 { 00:04:56.723 "method": "framework_enable_cpumask_locks", 00:04:56.723 "req_id": 1 00:04:56.723 } 00:04:56.723 Got JSON-RPC error response 00:04:56.723 response: 00:04:56.723 { 00:04:56.723 "code": -32603, 00:04:56.723 "message": "Failed to claim CPU core: 2" 00:04:56.723 } 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.723 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2473105 /var/tmp/spdk.sock 00:04:56.724 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2473105 ']' 00:04:56.724 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.724 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.724 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.724 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.724 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.982 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.982 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:56.982 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2473209 /var/tmp/spdk2.sock 00:04:56.982 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2473209 ']' 00:04:56.982 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.982 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.982 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:56.982 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.982 19:02:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.240 19:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:57.241 19:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:57.241 19:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:57.241 19:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:57.241 19:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:57.241 19:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:57.241 00:04:57.241 real 0m2.197s 00:04:57.241 user 0m1.276s 00:04:57.241 sys 0m0.184s 00:04:57.241 19:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.241 19:02:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.241 ************************************ 00:04:57.241 END TEST locking_overlapped_coremask_via_rpc 00:04:57.241 ************************************ 00:04:57.241 19:02:03 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:57.241 19:02:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2473105 ]] 00:04:57.241 19:02:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2473105 00:04:57.241 19:02:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2473105 ']' 00:04:57.241 19:02:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2473105 00:04:57.241 19:02:03 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:57.241 19:02:03 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:57.241 19:02:03 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2473105 00:04:57.241 19:02:03 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:57.241 19:02:03 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:57.241 19:02:03 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2473105' 00:04:57.241 killing process with pid 2473105 00:04:57.241 19:02:03 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2473105 00:04:57.241 19:02:03 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2473105 00:04:57.500 19:02:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2473209 ]] 00:04:57.500 19:02:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2473209 00:04:57.500 19:02:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2473209 ']' 00:04:57.500 19:02:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2473209 00:04:57.500 19:02:03 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:57.500 19:02:03 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:04:57.500 19:02:03 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2473209 00:04:57.500 19:02:03 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:57.501 19:02:03 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:57.501 19:02:03 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2473209' 00:04:57.501 killing process with pid 2473209 00:04:57.501 19:02:03 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2473209 00:04:57.501 19:02:03 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2473209 00:04:58.079 19:02:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:58.079 19:02:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:58.079 19:02:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2473105 ]] 00:04:58.080 19:02:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2473105 00:04:58.080 19:02:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2473105 ']' 00:04:58.080 19:02:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2473105 00:04:58.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2473105) - No such process 00:04:58.080 19:02:03 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2473105 is not found' 00:04:58.080 Process with pid 2473105 is not found 00:04:58.080 19:02:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2473209 ]] 00:04:58.080 19:02:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2473209 00:04:58.080 19:02:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2473209 ']' 00:04:58.080 19:02:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2473209 00:04:58.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2473209) - No such process 00:04:58.080 19:02:03 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2473209 is not found' 00:04:58.080 Process with pid 2473209 is not found 00:04:58.080 19:02:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:58.080 00:04:58.080 real 0m16.029s 00:04:58.080 user 0m28.783s 00:04:58.080 sys 0m5.279s 00:04:58.080 19:02:03 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.080 19:02:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.080 ************************************ 00:04:58.080 END TEST cpu_locks 00:04:58.080 ************************************ 00:04:58.080 00:04:58.080 real 0m41.600s 00:04:58.080 user 1m21.131s 00:04:58.080 sys 0m9.560s 00:04:58.080 19:02:03 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.080 19:02:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.080 ************************************ 00:04:58.080 END TEST event 00:04:58.080 ************************************ 00:04:58.080 19:02:03 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:58.080 19:02:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.080 19:02:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.080 19:02:03 -- common/autotest_common.sh@10 -- # set +x 00:04:58.080 ************************************ 00:04:58.080 START TEST thread 00:04:58.080 ************************************ 00:04:58.080 19:02:03 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:58.080 * Looking for test storage... 00:04:58.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:58.080 19:02:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:58.080 19:02:03 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:04:58.080 19:02:03 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.080 19:02:03 thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.080 ************************************ 00:04:58.080 START TEST thread_poller_perf 00:04:58.080 ************************************ 00:04:58.080 19:02:04 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:58.080 [2024-07-24 19:02:04.033640] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:04:58.080 [2024-07-24 19:02:04.033720] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473514 ] 00:04:58.080 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.400 [2024-07-24 19:02:04.093519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.400 [2024-07-24 19:02:04.210244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.400 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:59.342 ====================================== 00:04:59.342 busy:2716695280 (cyc) 00:04:59.342 total_run_count: 261000 00:04:59.342 tsc_hz: 2700000000 (cyc) 00:04:59.342 ====================================== 00:04:59.342 poller_cost: 10408 (cyc), 3854 (nsec) 00:04:59.342 00:04:59.342 real 0m1.312s 00:04:59.342 user 0m1.234s 00:04:59.342 sys 0m0.070s 00:04:59.342 19:02:05 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.342 19:02:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.342 ************************************ 00:04:59.342 END TEST thread_poller_perf 00:04:59.342 ************************************ 00:04:59.602 19:02:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:59.602 19:02:05 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:04:59.603 19:02:05 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.603 19:02:05 thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.603 ************************************ 00:04:59.603 START TEST thread_poller_perf 00:04:59.603 ************************************ 00:04:59.603 19:02:05 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:59.603 [2024-07-24 19:02:05.397568] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:04:59.603 [2024-07-24 19:02:05.397640] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473644 ] 00:04:59.603 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.603 [2024-07-24 19:02:05.457988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.603 [2024-07-24 19:02:05.579948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.603 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:00.983 ====================================== 00:05:00.983 busy:2703072264 (cyc) 00:05:00.983 total_run_count: 3670000 00:05:00.983 tsc_hz: 2700000000 (cyc) 00:05:00.983 ====================================== 00:05:00.983 poller_cost: 736 (cyc), 272 (nsec) 00:05:00.983 00:05:00.983 real 0m1.308s 00:05:00.983 user 0m1.223s 00:05:00.983 sys 0m0.077s 00:05:00.983 19:02:06 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.983 19:02:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.983 ************************************ 00:05:00.983 END TEST thread_poller_perf 00:05:00.983 ************************************ 00:05:00.983 19:02:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:00.983 00:05:00.983 real 0m2.784s 00:05:00.983 user 0m2.516s 00:05:00.983 sys 0m0.261s 00:05:00.983 19:02:06 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.983 19:02:06 thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.983 ************************************ 00:05:00.983 END TEST thread 00:05:00.983 ************************************ 00:05:00.983 19:02:06 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:05:00.983 19:02:06 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:00.984 19:02:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.984 19:02:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.984 19:02:06 -- common/autotest_common.sh@10 -- # set +x 00:05:00.984 ************************************ 00:05:00.984 START TEST app_cmdline 00:05:00.984 ************************************ 00:05:00.984 19:02:06 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:00.984 * Looking for test storage... 00:05:00.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:00.984 19:02:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:00.984 19:02:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2473889 00:05:00.984 19:02:06 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:00.984 19:02:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2473889 00:05:00.984 19:02:06 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2473889 ']' 00:05:00.984 19:02:06 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.984 19:02:06 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.984 19:02:06 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:00.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.984 19:02:06 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.984 19:02:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:00.984 [2024-07-24 19:02:06.885128] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:05:00.984 [2024-07-24 19:02:06.885230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473889 ] 00:05:00.984 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.984 [2024-07-24 19:02:06.945232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.244 [2024-07-24 19:02:07.062613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.503 19:02:07 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.503 19:02:07 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:01.503 19:02:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:01.763 { 00:05:01.763 "version": "SPDK v24.09-pre git sha1 ee633e585", 00:05:01.763 "fields": { 00:05:01.763 "major": 24, 00:05:01.763 "minor": 9, 00:05:01.763 "patch": 0, 00:05:01.763 "suffix": "-pre", 00:05:01.763 "commit": "ee633e585" 00:05:01.763 } 00:05:01.763 } 00:05:01.763 19:02:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:01.763 19:02:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:01.763 19:02:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:01.763 19:02:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:01.763 19:02:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:01.763 19:02:07 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.763 19:02:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:01.763 19:02:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:01.763 19:02:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:01.763 19:02:07 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.763 19:02:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:01.763 19:02:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:01.763 19:02:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:01.763 19:02:07 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:01.763 19:02:07 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:01.763 19:02:07 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:01.763 19:02:07 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:01.763 19:02:07 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:01.763 19:02:07 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:05:01.763 19:02:07 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:01.763 19:02:07 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:01.763 19:02:07 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:01.763 19:02:07 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:01.763 19:02:07 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:02.024 request: 00:05:02.024 { 00:05:02.024 "method": "env_dpdk_get_mem_stats", 00:05:02.024 "req_id": 1 00:05:02.024 } 00:05:02.024 Got JSON-RPC error response 00:05:02.024 response: 00:05:02.024 { 00:05:02.024 "code": -32601, 00:05:02.024 "message": "Method not found" 00:05:02.024 } 00:05:02.024 19:02:07 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:02.024 19:02:07 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:02.024 19:02:07 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:02.024 19:02:07 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:02.024 19:02:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2473889 00:05:02.024 19:02:07 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2473889 ']' 00:05:02.024 19:02:07 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2473889 00:05:02.024 19:02:07 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:02.024 19:02:07 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.024 19:02:07 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2473889 00:05:02.024 19:02:07 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:02.024 19:02:07 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:02.024 19:02:07 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2473889' 00:05:02.024 killing process with pid 2473889 00:05:02.024 19:02:07 app_cmdline -- common/autotest_common.sh@969 -- # kill 2473889 00:05:02.024 19:02:07 app_cmdline -- common/autotest_common.sh@974 -- # wait 2473889 00:05:02.284 00:05:02.284 real 0m1.498s 00:05:02.284 user 0m1.974s 00:05:02.284 sys 0m0.434s 00:05:02.284 19:02:08 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.284 19:02:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:02.284 ************************************ 00:05:02.284 END TEST app_cmdline 00:05:02.284 ************************************ 00:05:02.544 19:02:08 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:02.544 19:02:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.544 19:02:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.544 19:02:08 -- common/autotest_common.sh@10 -- # set +x 00:05:02.544 ************************************ 00:05:02.544 START TEST version 00:05:02.544 ************************************ 00:05:02.544 19:02:08 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:02.544 * Looking for test storage... 
00:05:02.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:02.544 19:02:08 version -- app/version.sh@17 -- # get_header_version major 00:05:02.544 19:02:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:02.544 19:02:08 version -- app/version.sh@14 -- # cut -f2 00:05:02.544 19:02:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:02.544 19:02:08 version -- app/version.sh@17 -- # major=24 00:05:02.544 19:02:08 version -- app/version.sh@18 -- # get_header_version minor 00:05:02.544 19:02:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:02.544 19:02:08 version -- app/version.sh@14 -- # cut -f2 00:05:02.544 19:02:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:02.544 19:02:08 version -- app/version.sh@18 -- # minor=9 00:05:02.544 19:02:08 version -- app/version.sh@19 -- # get_header_version patch 00:05:02.544 19:02:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:02.544 19:02:08 version -- app/version.sh@14 -- # cut -f2 00:05:02.544 19:02:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:02.544 19:02:08 version -- app/version.sh@19 -- # patch=0 00:05:02.544 19:02:08 version -- app/version.sh@20 -- # get_header_version suffix 00:05:02.544 19:02:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:02.544 19:02:08 version -- app/version.sh@14 -- # cut -f2 00:05:02.544 19:02:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:02.544 19:02:08 version -- app/version.sh@20 -- # suffix=-pre 00:05:02.544 19:02:08 version -- app/version.sh@22 -- # version=24.9 00:05:02.544 19:02:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:02.544 19:02:08 version -- app/version.sh@28 -- # version=24.9rc0 00:05:02.544 19:02:08 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:02.544 19:02:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:02.544 19:02:08 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:02.544 19:02:08 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:02.544 00:05:02.544 real 0m0.116s 00:05:02.544 user 0m0.054s 00:05:02.544 sys 0m0.083s 00:05:02.544 19:02:08 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.544 19:02:08 version -- common/autotest_common.sh@10 -- # set +x 00:05:02.544 ************************************ 00:05:02.544 END TEST version 00:05:02.544 ************************************ 00:05:02.544 19:02:08 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:05:02.544 19:02:08 -- spdk/autotest.sh@202 -- # uname -s 00:05:02.544 19:02:08 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:05:02.544 19:02:08 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:02.544 19:02:08 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:02.544 19:02:08 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
00:05:02.544 19:02:08 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:05:02.544 19:02:08 -- spdk/autotest.sh@264 -- # timing_exit lib 00:05:02.544 19:02:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.544 19:02:08 -- common/autotest_common.sh@10 -- # set +x 00:05:02.544 19:02:08 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:05:02.544 19:02:08 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:05:02.544 19:02:08 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:05:02.544 19:02:08 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:05:02.544 19:02:08 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:05:02.544 19:02:08 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:05:02.544 19:02:08 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:02.544 19:02:08 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:02.544 19:02:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.544 19:02:08 -- common/autotest_common.sh@10 -- # set +x 00:05:02.544 ************************************ 00:05:02.544 START TEST nvmf_tcp 00:05:02.544 ************************************ 00:05:02.544 19:02:08 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:02.803 * Looking for test storage... 00:05:02.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:02.803 19:02:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:02.803 19:02:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:02.803 19:02:08 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:02.803 19:02:08 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:02.803 19:02:08 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.803 19:02:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.803 ************************************ 00:05:02.803 START TEST nvmf_target_core 00:05:02.803 ************************************ 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:02.803 * Looking for test storage... 00:05:02.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.803 19:02:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:02.804 ************************************ 00:05:02.804 START TEST nvmf_abort 00:05:02.804 ************************************ 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:02.804 * Looking for test storage... 
00:05:02.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:05:02.804 19:02:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:04.716 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:05:04.717 Found 0000:08:00.0 (0x8086 - 0x159b) 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:05:04.717 Found 0000:08:00.1 (0x8086 - 0x159b) 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:04.717 19:02:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:05:04.717 Found net devices under 0000:08:00.0: cvl_0_0 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:05:04.717 Found net devices under 0000:08:00.1: cvl_0_1 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:04.717 
19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:04.717 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:04.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:04.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:05:04.717 00:05:04.717 --- 10.0.0.2 ping statistics --- 00:05:04.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:04.717 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:04.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:04.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:05:04.718 00:05:04.718 --- 10.0.0.1 ping statistics --- 00:05:04.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:04.718 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=2475414 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2475414 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2475414 ']' 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.718 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.718 [2024-07-24 19:02:10.615354] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:05:04.718 [2024-07-24 19:02:10.615451] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:04.718 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.718 [2024-07-24 19:02:10.684512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.977 [2024-07-24 19:02:10.806385] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:04.977 [2024-07-24 19:02:10.806455] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:04.977 [2024-07-24 19:02:10.806471] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:04.977 [2024-07-24 19:02:10.806492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:04.977 [2024-07-24 19:02:10.806504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
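The nvmfappstart step above amounts to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A minimal sketch, assuming rpc.py's rpc_get_methods as the liveness probe (the harness's waitforlisten wraps its own retry loop around the same idea):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Poll the UNIX-domain RPC socket (reachable from the root ns, since it
# lives on the shared filesystem) instead of sleeping a fixed time.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done

-m 0xE pins the reactors to cores 1-3, which matches the three "Reactor started" notices that follow.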
00:05:04.977 [2024-07-24 19:02:10.806571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.977 [2024-07-24 19:02:10.806644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.977 [2024-07-24 19:02:10.806608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.977 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.977 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:04.977 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:04.977 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.977 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.977 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:04.977 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:04.977 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.977 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.977 [2024-07-24 19:02:10.940108] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.977 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.977 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:04.977 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.978 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.978 Malloc0 00:05:04.978 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.978 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:04.978 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.978 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.978 Delay0 00:05:04.978 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.978 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:04.978 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.978 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:05.237 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.237 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:05.237 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.237 19:02:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:05.237 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:05:05.237 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:05.237 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.237 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:05.237 [2024-07-24 19:02:11.006369] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:05.237 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.237 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:05.237 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.237 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:05.237 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.237 19:02:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:05.237 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.237 [2024-07-24 19:02:11.153607] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:07.772 Initializing NVMe Controllers 00:05:07.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:07.772 controller IO queue size 128 less than required 00:05:07.772 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:07.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:07.772 Initialization complete. Launching workers. 
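Before the abort numbers land below, it helps to see the target stack those rpc_cmd calls just assembled. Replayed as direct rpc.py invocations (arguments verbatim from the trace; rpc_cmd is the harness wrapper around this script):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0          # 64 MB RAM-backed bdev, 4096 B blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The delay bdev's 1,000,000 us latencies (the -r/-t/-w/-n values are in microseconds) are the point: they keep plenty of I/O in flight so the abort example has live commands to cancel.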
00:05:07.772 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 26587 00:05:07.772 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26652, failed to submit 62 00:05:07.772 success 26591, unsuccess 61, failed 0 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:07.772 rmmod nvme_tcp 00:05:07.772 rmmod nvme_fabrics 00:05:07.772 rmmod nvme_keyring 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2475414 ']' 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2475414 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2475414 ']' 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2475414 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2475414 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2475414' 00:05:07.772 killing process with pid 2475414 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2475414 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2475414 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:07.772 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:07.773 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:09.683 00:05:09.683 real 0m6.865s 00:05:09.683 user 0m10.307s 00:05:09.683 sys 0m2.181s 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:09.683 ************************************ 00:05:09.683 END TEST nvmf_abort 00:05:09.683 ************************************ 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:09.683 ************************************ 00:05:09.683 START TEST nvmf_ns_hotplug_stress 00:05:09.683 ************************************ 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:09.683 * Looking for test storage... 
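The nvmftestfini/nvmfcleanup tail above reduces to the steps below, in the order the trace runs them. Sketch only: the _remove_spdk_ns helper is assumed here to be a plain namespace delete, which fits the cvl_0_1 address flush that follows it in the log.

sync
modprobe -v -r nvme-tcp              # module unloads in dependency order;
modprobe -v -r nvme-fabrics          # nvme-keyring goes along for the ride
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the target
ip netns delete cvl_0_0_ns_spdk      # assumption: this also returns cvl_0_0 to the root ns
ip -4 addr flush cvl_0_1             # drop the initiator address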
00:05:09.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.683 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:05:09.684 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
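The array declarations above and the device matching that follows implement a sysfs walk: gather the supported Intel E810/X722 and Mellanox PCI device IDs, then map each matching PCI function to its kernel netdev. A simplified sketch of that pattern using the IDs visible in the trace (the real common.sh also filters on link state and handles the RDMA cases):

declare -a net_devs=()
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 ]] || continue   # Intel only, for brevity
    case $(<"$pci/device") in
        0x1592|0x159b|0x37d2) ;;                    # E810 / X722 IDs from the trace
        *) continue ;;
    esac
    for dev in "$pci"/net/*; do
        [[ -e $dev ]] && net_devs+=("${dev##*/}")   # e.g. cvl_0_0, cvl_0_1
    done
done
printf 'Found net device: %s\n' "${net_devs[@]}"

On this rig the walk lands on 0000:08:00.0 and 0000:08:00.1 (0x8086:0x159b, an E810 pair), as the "Found net devices" lines below confirm.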
00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:05:11.589 Found 0000:08:00.0 (0x8086 - 0x159b) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:11.589 19:02:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:05:11.589 Found 0000:08:00.1 (0x8086 - 0x159b) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:05:11.589 Found net devices under 0000:08:00.0: cvl_0_0 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:11.589 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:05:11.590 Found net devices under 0000:08:00.1: cvl_0_1 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:11.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:11.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:05:11.590 00:05:11.590 --- 10.0.0.2 ping statistics --- 00:05:11.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:11.590 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:11.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:11.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:05:11.590 00:05:11.590 --- 10.0.0.1 ping statistics --- 00:05:11.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:11.590 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2477141 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2477141 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2477141 ']' 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.590 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:11.590 [2024-07-24 19:02:17.510601] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:05:11.590 [2024-07-24 19:02:17.510698] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:11.590 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.590 [2024-07-24 19:02:17.575966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.848 [2024-07-24 19:02:17.692254] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:11.848 [2024-07-24 19:02:17.692312] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:11.848 [2024-07-24 19:02:17.692327] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:11.848 [2024-07-24 19:02:17.692341] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:11.848 [2024-07-24 19:02:17.692353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:11.848 [2024-07-24 19:02:17.692446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.848 [2024-07-24 19:02:17.692515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.848 [2024-07-24 19:02:17.692520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.848 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.848 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:11.848 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:11.848 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.848 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:11.848 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:11.848 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:11.848 19:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:12.108 [2024-07-24 19:02:18.106747] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.366 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:12.624 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:12.881 
[2024-07-24 19:02:18.727694] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:12.881 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:13.139 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:13.397 Malloc0 00:05:13.397 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:13.655 Delay0 00:05:13.655 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.222 19:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:14.222 NULL1 00:05:14.480 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:14.737 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2477467 00:05:14.737 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:14.737 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:14.737 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.737 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.114 Read completed with error (sct=0, sc=11) 00:05:16.114 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.114 19:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:16.114 19:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:16.373 true 00:05:16.373 19:02:22 
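From null_size=1001 onward the trace is one iteration repeated: while spdk_nvme_perf (PID 2477467, -t 30) hammers cnode1, the namespace is hot-removed, re-added, and the NULL1 bdev grown by one unit. The loop's shape, condensed (guard and RPC order as logged; bdev_null_resize takes the new size in MB, matching the bdev_null_create NULL1 1000 512 above):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do    # run for as long as perf is alive
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove nsid 1 under load
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size" # grow NULL1 to null_size MB
done

The recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are perf reporting the expected read failures during the window in which nsid 1 is detached.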
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:16.373 19:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.311 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.311 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.567 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:17.567 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:17.825 true 00:05:17.825 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:17.825 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.082 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.339 19:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:18.339 19:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:18.597 true 00:05:18.597 19:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:18.597 19:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.855 19:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.421 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:19.421 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:19.679 true 00:05:19.679 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:19.679 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.937 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.196 19:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1005 00:05:20.196 19:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:20.454 true 00:05:20.454 19:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:20.454 19:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.393 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.393 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:21.393 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:21.651 true 00:05:21.909 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:21.909 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.167 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.425 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:22.425 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:22.720 true 00:05:22.720 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:22.720 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.002 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.259 19:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:23.259 19:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:23.517 true 00:05:23.517 19:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:23.517 19:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.774 19:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.340 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:24.340 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:24.340 true 00:05:24.597 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:24.597 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.528 19:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.786 19:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:25.786 19:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:26.043 true 00:05:26.043 19:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:26.043 19:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.301 19:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.558 19:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:26.558 19:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:26.815 true 00:05:26.815 19:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:26.815 19:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.072 19:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.636 19:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:27.636 19:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:27.636 true 00:05:27.893 19:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:27.893 19:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.824 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.824 19:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.824 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.824 19:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:28.824 19:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:29.388 true 00:05:29.388 19:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:29.388 19:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.646 19:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.904 19:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:29.904 19:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:30.162 true 00:05:30.162 19:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:30.162 19:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.419 19:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.677 19:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:30.677 19:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:30.934 true 00:05:30.934 19:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:30.934 19:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.191 19:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.449 19:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:31.449 19:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:31.706 true 00:05:31.706 19:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:31.706 19:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.638 19:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.203 19:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:33.203 19:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:33.203 true 00:05:33.461 19:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:33.461 19:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.719 19:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.977 19:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:33.977 19:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:34.234 true 00:05:34.234 19:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:34.234 19:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.492 19:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.750 19:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:34.750 19:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:35.008 true 00:05:35.008 19:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:35.008 19:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.573 19:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.573 19:02:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:35.573 19:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:35.830 true 00:05:35.830 19:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:35.830 19:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.201 19:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.201 19:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:37.201 19:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:37.458 true 00:05:37.458 19:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:37.458 19:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.716 19:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.280 19:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:38.280 19:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:38.280 true 00:05:38.538 19:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:38.538 19:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.795 19:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.052 19:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:39.052 19:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:39.309 true 00:05:39.309 19:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:39.309 19:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.566 19:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.823 19:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:39.823 19:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:40.081 true 00:05:40.081 19:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:40.081 19:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.012 19:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.272 19:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:41.272 19:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:41.530 true 00:05:41.530 19:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:41.530 19:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.097 19:02:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.355 19:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:42.355 19:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:42.613 true 00:05:42.613 19:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467 00:05:42.613 19:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.871 19:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.129 19:02:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:43.129 19:02:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:43.388 true 
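For orientation: the sh@44-sh@50 records above repeat one pass of the single-namespace hot-plug loop, and the interleaved "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" records are the I/O generator's reads racing the removals (sc=11 is NVMe generic status 0x0b, Invalid Namespace or Format, which is expected while the namespace is detached). A minimal sketch of the loop's shape, inferred from the sh@NN markers rather than quoted from ns_hotplug_stress.sh; $rpc_py, $perf_pid, and the starting null_size are illustrative names (the generator PID in this run is 2477467):

    while kill -0 "$perf_pid" 2>/dev/null; do                              # sh@44: run while the I/O generator lives
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove NSID 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                       # sh@49: 1009, 1010, ... 1029 in this run
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # sh@50: prints "true" on success
    done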
00:05:43.388 19:02:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467
00:05:43.388 19:02:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:44.323 19:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:44.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:44.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:44.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:44.581 19:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:44.581 19:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:44.869 true
00:05:44.869 19:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467
00:05:44.869 19:02:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:45.128 Initializing NVMe Controllers
00:05:45.128 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:45.128 Controller IO queue size 128, less than required.
00:05:45.128 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:45.128 Controller IO queue size 128, less than required.
00:05:45.128 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:45.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:45.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:45.128 Initialization complete. Launching workers.
00:05:45.128 ========================================================
00:05:45.128 Latency(us)
00:05:45.128 Device Information : IOPS MiB/s Average min max
00:05:45.128 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 598.40 0.29 74416.38 3096.17 1016532.97
00:05:45.128 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6306.26 3.08 20299.56 4561.20 641773.29
00:05:45.128 ========================================================
00:05:45.128 Total : 6904.66 3.37 24989.68 3096.17 1016532.97
00:05:45.128
00:05:45.128 19:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:45.386 19:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:05:45.386 19:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:45.643 true
00:05:45.643 19:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2477467
00:05:45.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2477467) - No such process
00:05:45.644 19:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2477467
00:05:45.644 19:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:46.213 19:02:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:46.213 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:46.213 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:46.213 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:46.213 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:46.213 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:46.780 null0
00:05:46.780 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:46.780 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:46.780 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:47.051 null1
00:05:47.051 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:47.051 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:47.051 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:47.315 null2 00:05:47.315 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.315 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.315 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:47.573 null3 00:05:47.573 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.573 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.573 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:47.831 null4 00:05:47.831 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.831 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.831 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:48.089 null5 00:05:48.089 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:48.089 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:48.089 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:48.347 null6 00:05:48.347 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:48.347 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:48.347 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:48.605 null7 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
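As a quick sanity check, the shutdown summary printed above (the Latency(us) table) is internally consistent: the Total row is the column-wise sum of the two namespace rows, with an IOPS-weighted mean latency:

    IOPS    : 598.40 + 6306.26 = 6904.66
    MiB/s   : 0.29 + 3.08 = 3.37
    Average : (598.40 * 74416.38 + 6306.26 * 20299.56) / 6904.66 ≈ 24989.7 us
    min/max : min(3096.17, 4561.20) = 3096.17, max(1016532.97, 641773.29) = 1016532.97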
00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
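The sh@58-sh@64 records above set up the multi-worker phase: eight 100 MB null bdevs are created, then eight background add_remove workers are launched, one namespace each, and the sh@66 wait below collects them. A minimal reconstruction of that shape, inferred from the trace markers rather than quoted from the script ($rpc_py is an illustrative name):

    add_remove() {                                     # worker body, sh@14-sh@18
        local nsid=$1 bdev=$2                          # sh@14
        for ((i = 0; i < 10; i++)); do                 # sh@16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }

    nthreads=8
    pids=()                                            # sh@58
    for ((i = 0; i < nthreads; i++)); do               # sh@59
        "$rpc_py" bdev_null_create "null$i" 100 4096   # sh@60: 100 MB bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do               # sh@62
        add_remove $((i + 1)) "null$i" &               # sh@63: nsid i+1 backed by null<i>
        pids+=($!)                                     # sh@64
    done
    wait "${pids[@]}"                                  # sh@66: PIDs 2480757 ... 2480770 in this run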
00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.605 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:48.606 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.606 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.606 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.606 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:48.606 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:48.606 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:48.606 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:48.606 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:48.606 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2480757 2480758 2480760 2480762 2480764 2480766 2480768 2480770 00:05:48.606 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:48.606 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.606 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.171 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.171 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.171 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.171 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.171 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.171 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.171 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.171 19:02:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.171 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.171 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.171 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.171 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.171 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.171 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.171 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.171 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.172 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.429 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.686 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.686 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.686 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.686 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.686 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.686 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.686 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.686 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.944 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.201 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.202 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.202 19:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.202 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.202 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.202 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.202 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.202 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.459 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.460 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.460 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.460 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.460 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.460 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.460 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.460 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.460 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.460 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.717 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.717 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.717 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.717 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.717 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.717 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.717 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.717 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
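From here on the eight workers' sh@17/sh@18 records interleave nondeterministically (the add round above runs in nsid order 1, 6, 7, 3, 2, 8, 5, 4), so adjacent add/remove records usually belong to different workers; grouping on the namespace-id argument recovers one worker's sequence. For example, assuming this console output were saved as build.log (a hypothetical file name):

    grep -n 'nvmf_subsystem_add_ns -n 4 ' build.log    # follow worker 4's hot-adds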
00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.975 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:51.233 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.233 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.233 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:51.233 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:51.233 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.233 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:51.233 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.233 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.233 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:51.490 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:51.490 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.490 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.491 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:51.748 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.748 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.748 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:51.748 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.748 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.748 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:51.748 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.748 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:51.748 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:51.748 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.749 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.749 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:52.006 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:52.006 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:52.006 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.006 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.006 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:52.006 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.006 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.007 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:52.007 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.007 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.007 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:52.007 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.007 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.007 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:52.007 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.007 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.007 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:52.007 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.007 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.007 19:02:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:52.264 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:52.264 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.264 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:52.264 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.264 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.264 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:52.264 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.264 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:52.264 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:52.264 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:52.264 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:52.264 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:52.522 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:52.522 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:52.522 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.522 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.522 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:52.522 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.522 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.522 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:52.522 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.522 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.522 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:52.522 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.522 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.522 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:52.780 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.038 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:53.296 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
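Each rpc.py line above is a thin CLI wrapper over SPDK's JSON-RPC socket; note that nvmf_subsystem_remove_ns takes the subsystem NQN and the NSID, not the backing bdev name. For reference, the same detach can be issued without the wrapper — a sketch assuming SPDK's default socket path /var/tmp/spdk.sock and an illustrative request id:

    # send the raw JSON-RPC request that rpc.py builds for "nvmf_subsystem_remove_ns cnode1 5"
    echo '{"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_remove_ns", "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 5}}' \
        | nc -U /var/tmp/spdk.sock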
00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.554 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:53.811 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:54.069 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:54.069 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.069 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:54.069 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:54.069 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.069 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.069 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.069 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.069 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.069 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:54.327 rmmod nvme_tcp 00:05:54.327 rmmod nvme_fabrics 00:05:54.327 rmmod nvme_keyring 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2477141 ']' 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2477141 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2477141 ']' 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2477141 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2477141 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2477141' 00:05:54.327 killing process with pid 2477141 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2477141 00:05:54.327 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2477141 00:05:54.585 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:54.585 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:54.585 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:54.585 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:54.585 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:54.585 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:54.585 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:54.585 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.178 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:57.178 00:05:57.178 real 0m46.945s 00:05:57.178 user 3m38.868s 00:05:57.178 sys 0m15.378s 00:05:57.178 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.178 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:57.178 ************************************ 00:05:57.178 END TEST nvmf_ns_hotplug_stress 00:05:57.178 ************************************ 00:05:57.178 19:03:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:57.178 19:03:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:57.178 19:03:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.178 19:03:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:57.178 ************************************ 00:05:57.178 START TEST nvmf_delete_subsystem 00:05:57.178 ************************************ 00:05:57.178 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:57.178 * Looking for test storage... 
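nvmf_ns_hotplug_stress ends here after 46.9 s of wall time, and run_test launches target/delete_subsystem.sh with --transport=tcp. As the xtrace below shows, that script stands up a TCP target whose only namespace is a delay bdev, drives it with spdk_nvme_perf, and then deletes the subsystem while I/O is still in flight. Condensed from the RPCs that appear later in this trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512              # null bdev: name, size, block size
    rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000     # avg/p99 read/write latencies, in microseconds
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0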
00:05:57.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:57.178 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.178 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:57.178 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:57.179 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
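gather_supported_nvmf_pci_devs (the nvmf/common.sh@296-318 trace around here) builds per-family lists of NIC PCI device IDs — Intel E810 (0x1592/0x159b), X722 (0x37d2), and several Mellanox ConnectX parts — and then resolves each matching PCI function to its Linux net device via sysfs, which is where the "Found 0000:08:00.x" and "Found net devices under ..." lines below come from. A hedged illustration of the same matching step (not common.sh itself):

    intel=8086
    for dev in 1592 159b; do                              # E810 device IDs from the trace
        for pci in $(lspci -Dmm -d "$intel:$dev" | awk '{print $1}'); do
            ls "/sys/bus/pci/devices/$pci/net/"           # net devices on this port, e.g. cvl_0_0
        done
    done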
00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:05:58.561 Found 0000:08:00.0 (0x8086 - 0x159b) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:05:58.561 Found 0000:08:00.1 (0x8086 - 0x159b) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:05:58.561 Found net devices under 0000:08:00.0: cvl_0_0 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:05:58.561 Found net devices under 0000:08:00.1: cvl_0_1 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:05:58.561 19:03:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:58.561 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:58.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:58.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:05:58.562 00:05:58.562 --- 10.0.0.2 ping statistics --- 00:05:58.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:58.562 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:58.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:58.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:05:58.562 00:05:58.562 --- 10.0.0.1 ping statistics --- 00:05:58.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:58.562 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2483051 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2483051 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2483051 ']' 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.562 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:58.822 [2024-07-24 19:03:04.597537] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
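nvmf_tcp_init splits the two E810 ports into an initiator side and a target side: the target port is moved into a dedicated network namespace, addressed, and nvmf_tgt is then launched inside that namespace (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x3" line above). The core sequence, as executed in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # 0.241 ms: namespace reachable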
00:05:58.822 [2024-07-24 19:03:04.597651] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:58.822 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.822 [2024-07-24 19:03:04.664722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.822 [2024-07-24 19:03:04.780727] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:58.822 [2024-07-24 19:03:04.780788] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:58.822 [2024-07-24 19:03:04.780804] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:58.822 [2024-07-24 19:03:04.780817] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:58.822 [2024-07-24 19:03:04.780829] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:58.822 [2024-07-24 19:03:04.780927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.822 [2024-07-24 19:03:04.781002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:59.083 [2024-07-24 19:03:04.922038] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:59.083 [2024-07-24 19:03:04.938267] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:59.083 NULL1 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:59.083 Delay0 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2483082 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:59.083 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:59.083 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.083 [2024-07-24 19:03:05.023082] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
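Everything the test provisions above goes through the rpc_cmd wrapper: a TCP transport, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, a null bdev, and a delay bdev layered on top of it as the namespace. The same sequence issued directly with SPDK's scripts/rpc.py client would look roughly like this (arguments copied from the trace; the rpc.py path is assumed from the workspace layout):

    # Provision the target as traced: transport, subsystem, listener,
    # then a 1000 MiB null bdev wrapped in a delay bdev as namespace 1.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$RPC" nvmf_create_transport -t tcp -o -u 8192       # -u: 8192 B in-capsule data
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$RPC" bdev_null_create NULL1 1000 512               # 1000 MiB backing bdev, 512 B blocks
    "$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The 1000000 us (1 s) latencies on the delay bdev are what guarantee that perf's queue depth of 128 is still outstanding when the subsystem is deleted, so the aborted completions below are the expected result.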
00:06:00.990 19:03:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:00.990 19:03:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:00.990 19:03:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:01.250 [several hundred 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions interleaved with 'starting I/O failed: -6', elided]
00:06:01.250 [2024-07-24 19:03:07.244383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc830000c00 is same with the state(5) to be set
00:06:02.189 [2024-07-24 19:03:08.199989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c600 is same with the state(5) to be set
00:06:02.450 [2024-07-24 19:03:08.246367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcfa0 is same with the state(5) to be set
00:06:02.450 [2024-07-24 19:03:08.246623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc83000d7c0 is same with the state(5) to be set
00:06:02.450 [2024-07-24 19:03:08.246968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bf130 is same with the state(5) to be set
00:06:02.450 [2024-07-24 19:03:08.247165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc83000d000 is same with the state(5) to be set
00:06:02.450 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:02.450 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:02.450 Initializing NVMe Controllers
00:06:02.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:02.450 Controller IO queue size 128, less than required.
00:06:02.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:02.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:02.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:02.450 Initialization complete. Launching workers.
00:06:02.450 ========================================================
00:06:02.450                                                                                              Latency(us)
00:06:02.450 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:02.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     187.99       0.09  902625.87     817.28 1012933.62
00:06:02.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     163.69       0.08  909595.37     691.34 1014271.17
00:06:02.450 ========================================================
00:06:02.450 Total                                                                    :     351.68       0.17  905869.79     691.34 1014271.17
00:06:02.450
00:06:02.450 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2483082
00:06:02.450 [2024-07-24 19:03:08.248068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129c600 (9): Bad file descriptor
00:06:02.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:02.450 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2483082
00:06:03.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2483082) - No such process
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2483082
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2483082
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2483082
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:03.018 19:03:08
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:03.018 [2024-07-24 19:03:08.768908] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2483703 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2483703 00:06:03.018 19:03:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:03.018 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.018 [2024-07-24 19:03:08.836121] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
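The first pass above deleted the subsystem while spdk_nvme_perf still had 128 commands queued against the delay bdev, then polled until perf exited and confirmed (via NOT wait) that it reported errors. The shape of that check, with rpc.py standing in for the rpc_cmd helper and $perf_pid captured from the background perf launch:

    # Delete the subsystem under load, then wait (bounded) for perf to exit.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
        (( delay++ > 30 )) && exit 1            # give up if perf never notices the delete
        sleep 0.5
    done

    ! wait "$perf_pid"                          # perf must exit nonzero ("errors occurred")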
00:06:03.276 19:03:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:03.276 19:03:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2483703 00:06:03.276 19:03:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:03.840 19:03:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:03.840 19:03:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2483703 00:06:03.840 19:03:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:04.405 19:03:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:04.405 19:03:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2483703 00:06:04.405 19:03:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:04.970 19:03:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:04.970 19:03:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2483703 00:06:04.970 19:03:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:05.535 19:03:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:05.535 19:03:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2483703 00:06:05.535 19:03:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:05.793 19:03:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:05.793 19:03:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2483703 00:06:05.793 19:03:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:06.357 Initializing NVMe Controllers 00:06:06.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:06.357 Controller IO queue size 128, less than required. 00:06:06.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:06.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:06.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:06.357 Initialization complete. Launching workers. 
00:06:06.357 ========================================================
00:06:06.357                                                                                              Latency(us)
00:06:06.357 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:06.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1004758.65 1000228.78 1041159.52
00:06:06.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004800.82 1000212.62 1013836.75
00:06:06.357 ========================================================
00:06:06.357 Total                                                                    :     256.00       0.12 1004779.73 1000212.62 1041159.52
00:06:06.357
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2483703
00:06:06.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2483703) - No such process
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2483703
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:06.357 rmmod nvme_tcp
00:06:06.357 rmmod nvme_fabrics
00:06:06.357 rmmod nvme_keyring
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2483051 ']'
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2483051
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2483051 ']'
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2483051
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:06:06.357 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:06.615 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2483051
00:06:06.615 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:06.615 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '['
reactor_0 = sudo ']' 00:06:06.615 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2483051' 00:06:06.615 killing process with pid 2483051 00:06:06.615 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2483051 00:06:06.615 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2483051 00:06:06.615 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:06.615 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:06.615 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:06.615 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:06.615 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:06.615 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.615 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:06.615 19:03:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:09.162 00:06:09.162 real 0m12.016s 00:06:09.162 user 0m27.882s 00:06:09.162 sys 0m2.776s 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.162 ************************************ 00:06:09.162 END TEST nvmf_delete_subsystem 00:06:09.162 ************************************ 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:09.162 ************************************ 00:06:09.162 START TEST nvmf_host_management 00:06:09.162 ************************************ 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:09.162 * Looking for test storage... 
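The teardown traced above (nvmftestfini) runs in a fixed order: unload the host-side NVMe modules, kill the nvmf_tgt process, then remove the namespace plumbing. Condensed into plain commands, with module, pid, and interface names from this run (the netns removal is an assumption about what the _remove_spdk_ns helper does):

    # Teardown in the order nvmftestfini performs it above.
    modprobe -v -r nvme-tcp           # drops nvme_tcp/nvme_fabrics/nvme_keyring (rmmod lines above)
    modprobe -v -r nvme-fabrics

    kill 2483051 && wait 2483051      # nvmfpid of this run's nvmf_tgt

    ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1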
00:06:09.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:06:09.162 19:03:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:06:10.539 
19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:10.539 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:10.539 Found 0000:08:00.1 (0x8086 - 0x159b) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:06:10.539 Found net devices under 0000:08:00.0: cvl_0_0 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:10.539 Found net devices under 0000:08:00.1: cvl_0_1 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:06:10.539 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:10.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:10.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:06:10.540 00:06:10.540 --- 10.0.0.2 ping statistics --- 00:06:10.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:10.540 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:10.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:10.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:06:10.540 00:06:10.540 --- 10.0.0.1 ping statistics --- 00:06:10.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:10.540 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2485725 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2485725 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2485725 ']' 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.540 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.540 [2024-07-24 19:03:16.517752] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:06:10.540 [2024-07-24 19:03:16.517850] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:10.540 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.798 [2024-07-24 19:03:16.584249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.798 [2024-07-24 19:03:16.703651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:10.798 [2024-07-24 19:03:16.703711] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:10.798 [2024-07-24 19:03:16.703728] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:10.798 [2024-07-24 19:03:16.703742] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:10.798 [2024-07-24 19:03:16.703753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:10.798 [2024-07-24 19:03:16.703840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.798 [2024-07-24 19:03:16.703924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.798 [2024-07-24 19:03:16.703975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:10.798 [2024-07-24 19:03:16.703979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.056 [2024-07-24 19:03:16.842783] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystems 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.056 Malloc0 00:06:11.056 [2024-07-24 19:03:16.901263] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2485851 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2485851 /var/tmp/bdevperf.sock 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2485851 ']' 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:11.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
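The rm/cat/rpc_cmd trio traced at target/host_management.sh@22-@30 above replays a batch of target-side RPCs whose body xtrace does not echo. A plausible reconstruction of that batch using standard scripts/rpc.py commands follows; the Malloc0 geometry (64 MiB, 512-byte blocks) and the serial number are illustrative assumptions, while the NQNs, the Malloc0 name and the 10.0.0.2:4420 TCP listener are confirmed by the surrounding log lines:

    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    cat > "$testdir/rpcs.txt" << EOF
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    EOF
    rpc_cmd < "$testdir/rpcs.txt"   # one scripts/rpc.py call per line, against the target's /var/tmp/spdk.sock

The nvmf_subsystem_add_host entry is inferred from the remove_host/add_host calls later in this run; without it, the subsystem (which denies unknown hosts by default) would never have accepted host0's connection.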
00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:11.056 { 00:06:11.056 "params": { 00:06:11.056 "name": "Nvme$subsystem", 00:06:11.056 "trtype": "$TEST_TRANSPORT", 00:06:11.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:11.056 "adrfam": "ipv4", 00:06:11.056 "trsvcid": "$NVMF_PORT", 00:06:11.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:11.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:11.056 "hdgst": ${hdgst:-false}, 00:06:11.056 "ddgst": ${ddgst:-false} 00:06:11.056 }, 00:06:11.056 "method": "bdev_nvme_attach_controller" 00:06:11.056 } 00:06:11.056 EOF 00:06:11.056 )") 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:11.056 19:03:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:11.056 "params": { 00:06:11.056 "name": "Nvme0", 00:06:11.056 "trtype": "tcp", 00:06:11.056 "traddr": "10.0.0.2", 00:06:11.056 "adrfam": "ipv4", 00:06:11.056 "trsvcid": "4420", 00:06:11.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:11.056 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:11.056 "hdgst": false, 00:06:11.056 "ddgst": false 00:06:11.056 }, 00:06:11.056 "method": "bdev_nvme_attach_controller" 00:06:11.056 }' 00:06:11.056 [2024-07-24 19:03:16.986850] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:06:11.056 [2024-07-24 19:03:16.986941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485851 ] 00:06:11.056 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.056 [2024-07-24 19:03:17.048215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.314 [2024-07-24 19:03:17.165387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.572 Running I/O for 10 seconds... 
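One note on the bdevperf invocation above: --json /dev/fd/63 is bash process substitution, i.e. the resolved gen_nvmf_target_json output shown by the printf trace is handed to bdevperf as its bdev configuration. A spelled-out equivalent, where only the temp-file path is an illustrative assumption and the flags are the ones in the log:

    # -q 64: queue depth of 64, -o 65536: 64 KiB I/O size,
    # -w verify: verify read/write workload, -t 10: run for 10 seconds
    gen_nvmf_target_json 0 > /tmp/host_mgmt_bdev.json
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json /tmp/host_mgmt_bdev.json \
        -q 64 -o 65536 -w verify -t 10

The -r socket keeps bdevperf's RPC server separate from the target's, which is what lets the script poll bdev_get_iostat on /var/tmp/bdevperf.sock below while the target keeps answering on /var/tmp/spdk.sock.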
00:06:11.572 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:11.573 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:11.832 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:11.832 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:11.832 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:11.832 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:11.832 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.832 19:03:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.832 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.833 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=462 00:06:11.833 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 462 -ge 100 ']' 00:06:11.833 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:11.833 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:11.833 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:11.833 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:11.833 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.833 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.833
[2024-07-24 19:03:17.752051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x725500 is same with the state(5) to be set 00:06:11.833
[identical tcp.c:1653 *ERROR* message repeated for each recv-state transition, timestamps 19:03:17.752149 through 19:03:17.752530]
19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.833 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:11.833 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.833 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.833
[2024-07-24 19:03:17.763136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.833
[2024-07-24 19:03:17.763181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.833
[the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for admin cid:1, cid:2 and cid:3, timestamps 19:03:17.763200 through 19:03:17.763275]
[2024-07-24 19:03:17.763289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea38d0 is same with the state(5) to be set 00:06:11.833
[2024-07-24 19:03:17.763390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.833
[2024-07-24 19:03:17.763413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.833
[the same command/completion pair repeated for every remaining in-flight verify I/O, timestamps 19:03:17.763441 through 19:03:17.765490: READ sqid:1 cid:35-63 covering lba 70016-73600 and WRITE sqid:1 cid:0-33 covering lba 73728-77952, each len:128, every one completed ABORTED - SQ DELETION; interleaved xtrace kept: 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] and 19:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1]
[2024-07-24 19:03:17.765574] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12d53d0 was disconnected and freed. reset controller.
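Taken together, the trace above is the core of the host-management check: the waitforio loop (host_management.sh@54-@62) first proves verify I/O is flowing (read_io_count 67, then 462), after which the script revokes the connected host, which is what produces the ABORTED - SQ DELETION flood. A condensed sketch of that sequence, with rpc_cmd standing in for scripts/rpc.py as elsewhere in this log:

    # poll bdevperf's own RPC socket until the verify job has completed >= 100 reads
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && break
        sleep 0.25
    done
    # revoke host0 on the target: its live TCP qpairs are torn down and every
    # in-flight command completes ABORTED - SQ DELETION (the flood above) ...
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # ... then re-allow it so the host's automatic reset/reconnect can succeed
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1

The "resetting controller" / "Resetting controller successful" notices that follow are the host side of this: bdev_nvme drops the dead qpair and reconnects, while the first verify job itself is expected to end in error (the failed-job latency table below) before the script re-runs bdevperf to confirm the target is usable again.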
00:06:11.835 [2024-07-24 19:03:17.766857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:11.835 task offset: 69888 on job bdev=Nvme0n1 fails 00:06:11.835 00:06:11.835 Latency(us) 00:06:11.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:11.835 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:11.835 Job: Nvme0n1 ended in about 0.43 seconds with error 00:06:11.835 Verification LBA range: start 0x0 length 0x400 00:06:11.835 Nvme0n1 : 0.43 1272.45 79.53 149.15 0.00 43534.52 2888.44 42137.22 00:06:11.835 =================================================================================================================== 00:06:11.835 Total : 1272.45 79.53 149.15 0.00 43534.52 2888.44 42137.22 00:06:11.835 [2024-07-24 19:03:17.769158] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.835 [2024-07-24 19:03:17.769190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea38d0 (9): Bad file descriptor 00:06:11.835 [2024-07-24 19:03:17.779565] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:12.767 19:03:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2485851 00:06:12.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2485851) - No such process 00:06:12.767 19:03:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:12.767 19:03:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:12.767 19:03:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:12.767 19:03:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:12.767 19:03:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:12.767 19:03:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:12.767 19:03:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:12.767 19:03:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:12.767 { 00:06:12.767 "params": { 00:06:12.767 "name": "Nvme$subsystem", 00:06:12.767 "trtype": "$TEST_TRANSPORT", 00:06:12.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:12.767 "adrfam": "ipv4", 00:06:12.767 "trsvcid": "$NVMF_PORT", 00:06:12.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:12.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:12.767 "hdgst": ${hdgst:-false}, 00:06:12.767 "ddgst": ${ddgst:-false} 00:06:12.767 }, 00:06:12.767 "method": "bdev_nvme_attach_controller" 00:06:12.767 } 00:06:12.767 EOF 00:06:12.767 )") 00:06:12.767 19:03:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:12.767 19:03:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:06:12.767 19:03:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:12.767 19:03:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:12.767 "params": { 00:06:12.767 "name": "Nvme0", 00:06:12.767 "trtype": "tcp", 00:06:12.767 "traddr": "10.0.0.2", 00:06:12.767 "adrfam": "ipv4", 00:06:12.767 "trsvcid": "4420", 00:06:12.767 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:12.767 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:12.767 "hdgst": false, 00:06:12.767 "ddgst": false 00:06:12.767 }, 00:06:12.767 "method": "bdev_nvme_attach_controller" 00:06:12.767 }' 00:06:13.026 [2024-07-24 19:03:18.816925] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:06:13.026 [2024-07-24 19:03:18.817019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486064 ] 00:06:13.026 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.026 [2024-07-24 19:03:18.878828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.026 [2024-07-24 19:03:18.998631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.286 Running I/O for 1 seconds... 00:06:14.223 00:06:14.223 Latency(us) 00:06:14.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:14.223 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:14.223 Verification LBA range: start 0x0 length 0x400 00:06:14.223 Nvme0n1 : 1.02 1385.68 86.61 0.00 0.00 45316.64 7475.96 39224.51 00:06:14.223 =================================================================================================================== 00:06:14.223 Total : 1385.68 86.61 0.00 0.00 45316.64 7475.96 39224.51 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:14.483 rmmod nvme_tcp 00:06:14.483 rmmod nvme_fabrics 00:06:14.483 rmmod nvme_keyring 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2485725 ']' 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2485725 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2485725 ']' 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2485725 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2485725 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2485725' 00:06:14.483 killing process with pid 2485725 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2485725 00:06:14.483 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2485725 00:06:14.743 [2024-07-24 19:03:20.694414] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:14.743 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:14.743 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:14.743 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:14.743 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:14.743 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:14.743 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.743 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:14.743 19:03:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.280 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:17.280 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:17.280 00:06:17.280 real 0m8.085s 00:06:17.280 user 0m18.853s 00:06:17.280 sys 0m2.267s 00:06:17.280 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.280 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.280 ************************************ 00:06:17.280 END 
TEST nvmf_host_management 00:06:17.280 ************************************ 00:06:17.280 19:03:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:17.280 19:03:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:17.280 19:03:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.280 19:03:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:17.280 ************************************ 00:06:17.280 START TEST nvmf_lvol 00:06:17.280 ************************************ 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:17.281 * Looking for test storage... 00:06:17.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.281 19:03:22 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:06:17.281 19:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 
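[Annotation] The xtrace_disable_per_cmd _remove_spdk_ns / eval '_remove_spdk_ns 15> /dev/null' pair that recurs in this trace is how the harness silences set -x for a single noisy command: the trace stream is bound to a dedicated file descriptor via BASH_XTRACEFD, and redirecting that descriptor to /dev/null for one command discards the trace of everything that command runs. A tiny standalone illustration, assuming fd 15 is the trace fd as it appears to be in this log (the exact setup in common.sh is not shown here):

#!/usr/bin/env bash
exec 15>&2              # open fd 15 as a copy of stderr
BASH_XTRACEFD=15        # route all set -x output through fd 15
set -x
cleanup() { echo "tearing down"; }
cleanup                       # body is traced to stderr via fd 15
eval 'cleanup 15> /dev/null'  # fd 15 -> /dev/null for this command only,
                              # so the trace of cleanup's body is discarded
cleanup                       # traced again once fd 15 is restored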
00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:18.659 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:18.660 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:18.660 Found 0000:08:00.1 (0x8086 - 0x159b) 
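[Annotation] The device classification above keys off PCI vendor:device pairs; 0x8086:0x159b is the Intel E810-family NIC (ice driver) found here on 0000:08:00.0 and 0000:08:00.1. The real script consults a prebuilt pci_bus_cache; a condensed sketch of the same check reading sysfs directly:

#!/usr/bin/env bash
# Sketch: locate Intel E810 (0x8086:0x159b) PCI functions and the net
# devices bound to them, mirroring the matching logic traced above.
for pci in /sys/bus/pci/devices/*; do
  vendor=$(<"$pci/vendor")
  device=$(<"$pci/device")
  if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
    echo "Found ${pci##*/} ($vendor - $device)"
    ls "$pci/net" 2>/dev/null   # e.g. cvl_0_0 / cvl_0_1 on this rig
  fi
done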
00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:06:18.660 Found net devices under 0000:08:00.0: cvl_0_0 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:18.660 Found net devices under 0000:08:00.1: cvl_0_1 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:18.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:18.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:06:18.660 00:06:18.660 --- 10.0.0.2 ping statistics --- 00:06:18.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.660 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:18.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:18.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:06:18.660 00:06:18.660 --- 10.0.0.1 ping statistics --- 00:06:18.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.660 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:18.660 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:18.920 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:18.920 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:18.920 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:18.920 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:18.920 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2487677 00:06:18.920 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:18.920 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2487677 00:06:18.920 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2487677 ']' 00:06:18.920 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.920 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.920 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.920 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.920 19:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:18.920 [2024-07-24 19:03:24.740278] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:06:18.920 [2024-07-24 19:03:24.740376] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:18.920 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.920 [2024-07-24 19:03:24.806622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.920 [2024-07-24 19:03:24.926474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:18.920 [2024-07-24 19:03:24.926560] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:18.920 [2024-07-24 19:03:24.926576] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:18.920 [2024-07-24 19:03:24.926595] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:18.920 [2024-07-24 19:03:24.926607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:18.920 [2024-07-24 19:03:24.926698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.920 [2024-07-24 19:03:24.926781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.920 [2024-07-24 19:03:24.926814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.179 19:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.179 19:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:19.179 19:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:19.179 19:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:19.179 19:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:19.179 19:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:19.179 19:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:19.436 [2024-07-24 19:03:25.341394] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.436 19:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:19.692 19:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:19.692 19:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:20.256 19:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:20.256 19:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:20.514 19:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:20.772 19:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8586e41e-bd44-441d-82d9-a21fe9db82eb 
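[Annotation] The lvstore created at the end of the records below sits on a raid0 of two 64 MiB malloc bdevs, so roughly 128 MiB is available; the test then carves a 20 MiB lvol out of it, exports it as a namespace of cnode0 on 10.0.0.2:4420, and snapshots, resizes to 30 MiB, clones, and inflates it while spdk_nvme_perf drives I/O. A condensed replay of that rpc.py chain, combining the calls below with those that follow (rpc path shortened; every call and argument is taken from the trace):

#!/usr/bin/env bash
rpc=./scripts/rpc.py   # shortened; the run uses the full workspace path

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                 # -> Malloc0
$rpc bdev_malloc_create 64 512                                 # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                 # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                # 20 MiB volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Snapshot/clone dance performed mid-I/O later in the trace:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"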
00:06:20.772 19:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8586e41e-bd44-441d-82d9-a21fe9db82eb lvol 20 00:06:21.030 19:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c4d9a8e6-8c65-4a2e-8395-aabd8670f7a0 00:06:21.030 19:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:21.288 19:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c4d9a8e6-8c65-4a2e-8395-aabd8670f7a0 00:06:21.545 19:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:21.803 [2024-07-24 19:03:27.779291] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:21.803 19:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:22.372 19:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2488017 00:06:22.372 19:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:22.372 19:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:22.372 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.306 19:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c4d9a8e6-8c65-4a2e-8395-aabd8670f7a0 MY_SNAPSHOT 00:06:23.565 19:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3585c7b0-0bdd-4d80-9958-5f656f508efb 00:06:23.565 19:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c4d9a8e6-8c65-4a2e-8395-aabd8670f7a0 30 00:06:23.823 19:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3585c7b0-0bdd-4d80-9958-5f656f508efb MY_CLONE 00:06:24.391 19:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e652dbbe-a588-470f-b51d-03926e40b98a 00:06:24.391 19:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e652dbbe-a588-470f-b51d-03926e40b98a 00:06:24.961 19:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2488017 00:06:33.087 Initializing NVMe Controllers 00:06:33.087 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:33.087 Controller IO queue size 128, less than required. 00:06:33.087 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:33.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:33.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:33.087 Initialization complete. Launching workers. 00:06:33.087 ======================================================== 00:06:33.087 Latency(us) 00:06:33.087 Device Information : IOPS MiB/s Average min max 00:06:33.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9730.40 38.01 13158.50 1248.74 82243.47 00:06:33.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9462.00 36.96 13532.40 2756.07 78976.60 00:06:33.087 ======================================================== 00:06:33.087 Total : 19192.40 74.97 13342.83 1248.74 82243.47 00:06:33.087 00:06:33.087 19:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:33.087 19:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c4d9a8e6-8c65-4a2e-8395-aabd8670f7a0 00:06:33.345 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8586e41e-bd44-441d-82d9-a21fe9db82eb 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:33.603 rmmod nvme_tcp 00:06:33.603 rmmod nvme_fabrics 00:06:33.603 rmmod nvme_keyring 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2487677 ']' 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2487677 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2487677 ']' 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2487677 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2487677 00:06:33.603 19:03:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2487677' 00:06:33.603 killing process with pid 2487677 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2487677 00:06:33.603 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2487677 00:06:33.863 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:33.863 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:33.863 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:33.863 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:33.863 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:33.863 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.863 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.863 19:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.827 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:35.827 00:06:35.827 real 0m18.994s 00:06:35.827 user 1m6.060s 00:06:35.827 sys 0m5.408s 00:06:35.827 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.827 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:35.827 ************************************ 00:06:35.827 END TEST nvmf_lvol 00:06:35.827 ************************************ 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:36.086 ************************************ 00:06:36.086 START TEST nvmf_lvs_grow 00:06:36.086 ************************************ 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:36.086 * Looking for test storage... 
00:06:36.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.086 19:03:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:36.086 19:03:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:06:36.086 19:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:37.996 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:37.997 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:37.997 Found 0000:08:00.1 (0x8086 - 0x159b) 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:37.997 
19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:06:37.997 Found net devices under 0000:08:00.0: cvl_0_0 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:37.997 Found net devices under 0000:08:00.1: cvl_0_1 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:37.997 19:03:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:37.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:37.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:06:37.997 00:06:37.997 --- 10.0.0.2 ping statistics --- 00:06:37.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.997 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:37.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
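
For reference, the namespace topology the harness has just traced boils down to the following commands (a condensed restatement of the ip/iptables calls above, not the script itself; the cvl_0_* interface names are specific to this run's ice NICs):

    ip netns add cvl_0_0_ns_spdk                         # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

Putting the target-side port in a namespace lets a single two-port host act as both target and initiator over a real TCP connection instead of the local loopback stack, which is what the two cross-namespace pings verify.
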
00:06:37.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:06:37.997 00:06:37.997 --- 10.0.0.1 ping statistics --- 00:06:37.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.997 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2490546 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2490546 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2490546 ']' 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.997 19:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:37.997 [2024-07-24 19:03:43.717579] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
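
The target itself is then launched inside that namespace and the script blocks until its RPC socket answers. A minimal sketch of the same sequence, assuming the default /var/tmp/spdk.sock socket and a plain poll loop in place of the script's waitforlisten helper:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll the RPC socket until the app is up (rough equivalent of waitforlisten)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # traced just below
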
00:06:37.998 [2024-07-24 19:03:43.717681] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.998 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.998 [2024-07-24 19:03:43.784438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.998 [2024-07-24 19:03:43.903137] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.998 [2024-07-24 19:03:43.903205] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.998 [2024-07-24 19:03:43.903221] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.998 [2024-07-24 19:03:43.903234] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:37.998 [2024-07-24 19:03:43.903245] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:37.998 [2024-07-24 19:03:43.903277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.256 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.256 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:06:38.256 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:38.256 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:38.256 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:38.256 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:38.256 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:38.515 [2024-07-24 19:03:44.311926] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.515 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:38.515 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.515 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.515 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:38.515 ************************************ 00:06:38.515 START TEST lvs_grow_clean 00:06:38.515 ************************************ 00:06:38.515 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:06:38.515 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:38.515 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:38.515 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:38.515 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:06:38.515 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:38.515 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:38.515 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:38.515 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:38.515 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:38.773 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:38.773 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:39.032 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=49a23213-93da-4ff8-bb5e-4c1f399e981e 00:06:39.032 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49a23213-93da-4ff8-bb5e-4c1f399e981e 00:06:39.032 19:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:39.290 19:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:39.290 19:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:39.290 19:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 49a23213-93da-4ff8-bb5e-4c1f399e981e lvol 150 00:06:39.858 19:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8ecd2bbf-04b9-42ad-9b2d-5a60797d9df8 00:06:39.858 19:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:39.858 19:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:39.858 [2024-07-24 19:03:45.855231] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:39.858 [2024-07-24 19:03:45.855314] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:39.858 true 00:06:40.117 19:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:40.117 19:03:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49a23213-93da-4ff8-bb5e-4c1f399e981e 00:06:40.117 19:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:40.117 19:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:40.375 19:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8ecd2bbf-04b9-42ad-9b2d-5a60797d9df8 00:06:40.632 19:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:40.890 [2024-07-24 19:03:46.838402] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:40.890 19:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:41.148 19:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2490890 00:06:41.148 19:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:41.148 19:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2490890 /var/tmp/bdevperf.sock 00:06:41.148 19:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2490890 ']' 00:06:41.148 19:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:41.148 19:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:41.148 19:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.148 19:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:41.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:41.148 19:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.148 19:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:41.148 [2024-07-24 19:03:47.143535] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
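
At this point the clean-grow fixture is fully wired: a 200 MiB file-backed AIO bdev carries an lvstore with 4 MiB clusters (49 data clusters once metadata is deducted), a 150 MiB lvol on top of it is exported as a namespace of cnode0, and the backing file has already been grown to 400 MiB and rescanned. Condensed (rpc.py = scripts/rpc.py; the CI workspace path is shortened to /tmp/aio_bdev; shell variables are illustrative):

    truncate -s 200M /tmp/aio_bdev
    rpc.py bdev_aio_create /tmp/aio_bdev aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)        # -> 49 data clusters
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)          # 150 MiB lvol
    truncate -s 400M /tmp/aio_bdev                              # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev                             # 51200 -> 102400 blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note that the lvstore still reports 49 clusters after the rescan: the AIO bdev has grown, but the blobstore on top of it is not resized until bdev_lvol_grow_lvstore is called.
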
00:06:41.148 [2024-07-24 19:03:47.143621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490890 ] 00:06:41.407 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.407 [2024-07-24 19:03:47.199349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.407 [2024-07-24 19:03:47.316674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.407 19:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.407 19:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:06:41.407 19:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:41.973 Nvme0n1 00:06:41.973 19:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:42.232 [ 00:06:42.232 { 00:06:42.232 "name": "Nvme0n1", 00:06:42.232 "aliases": [ 00:06:42.232 "8ecd2bbf-04b9-42ad-9b2d-5a60797d9df8" 00:06:42.232 ], 00:06:42.232 "product_name": "NVMe disk", 00:06:42.232 "block_size": 4096, 00:06:42.232 "num_blocks": 38912, 00:06:42.232 "uuid": "8ecd2bbf-04b9-42ad-9b2d-5a60797d9df8", 00:06:42.232 "assigned_rate_limits": { 00:06:42.232 "rw_ios_per_sec": 0, 00:06:42.232 "rw_mbytes_per_sec": 0, 00:06:42.232 "r_mbytes_per_sec": 0, 00:06:42.232 "w_mbytes_per_sec": 0 00:06:42.232 }, 00:06:42.232 "claimed": false, 00:06:42.232 "zoned": false, 00:06:42.232 "supported_io_types": { 00:06:42.232 "read": true, 00:06:42.232 "write": true, 00:06:42.232 "unmap": true, 00:06:42.232 "flush": true, 00:06:42.232 "reset": true, 00:06:42.232 "nvme_admin": true, 00:06:42.232 "nvme_io": true, 00:06:42.232 "nvme_io_md": false, 00:06:42.232 "write_zeroes": true, 00:06:42.232 "zcopy": false, 00:06:42.232 "get_zone_info": false, 00:06:42.232 "zone_management": false, 00:06:42.233 "zone_append": false, 00:06:42.233 "compare": true, 00:06:42.233 "compare_and_write": true, 00:06:42.233 "abort": true, 00:06:42.233 "seek_hole": false, 00:06:42.233 "seek_data": false, 00:06:42.233 "copy": true, 00:06:42.233 "nvme_iov_md": false 00:06:42.233 }, 00:06:42.233 "memory_domains": [ 00:06:42.233 { 00:06:42.233 "dma_device_id": "system", 00:06:42.233 "dma_device_type": 1 00:06:42.233 } 00:06:42.233 ], 00:06:42.233 "driver_specific": { 00:06:42.233 "nvme": [ 00:06:42.233 { 00:06:42.233 "trid": { 00:06:42.233 "trtype": "TCP", 00:06:42.233 "adrfam": "IPv4", 00:06:42.233 "traddr": "10.0.0.2", 00:06:42.233 "trsvcid": "4420", 00:06:42.233 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:42.233 }, 00:06:42.233 "ctrlr_data": { 00:06:42.233 "cntlid": 1, 00:06:42.233 "vendor_id": "0x8086", 00:06:42.233 "model_number": "SPDK bdev Controller", 00:06:42.233 "serial_number": "SPDK0", 00:06:42.233 "firmware_revision": "24.09", 00:06:42.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:42.233 "oacs": { 00:06:42.233 "security": 0, 00:06:42.233 "format": 0, 00:06:42.233 "firmware": 0, 00:06:42.233 "ns_manage": 0 00:06:42.233 }, 00:06:42.233 
"multi_ctrlr": true, 00:06:42.233 "ana_reporting": false 00:06:42.233 }, 00:06:42.233 "vs": { 00:06:42.233 "nvme_version": "1.3" 00:06:42.233 }, 00:06:42.233 "ns_data": { 00:06:42.233 "id": 1, 00:06:42.233 "can_share": true 00:06:42.233 } 00:06:42.233 } 00:06:42.233 ], 00:06:42.233 "mp_policy": "active_passive" 00:06:42.233 } 00:06:42.233 } 00:06:42.233 ] 00:06:42.233 19:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2490995 00:06:42.233 19:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:42.233 19:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:42.233 Running I/O for 10 seconds... 00:06:43.613 Latency(us) 00:06:43.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:43.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:43.613 Nvme0n1 : 1.00 13590.00 53.09 0.00 0.00 0.00 0.00 0.00 00:06:43.613 =================================================================================================================== 00:06:43.613 Total : 13590.00 53.09 0.00 0.00 0.00 0.00 0.00 00:06:43.613 00:06:44.182 19:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 49a23213-93da-4ff8-bb5e-4c1f399e981e 00:06:44.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:44.440 Nvme0n1 : 2.00 13657.50 53.35 0.00 0.00 0.00 0.00 0.00 00:06:44.440 =================================================================================================================== 00:06:44.440 Total : 13657.50 53.35 0.00 0.00 0.00 0.00 0.00 00:06:44.440 00:06:44.440 true 00:06:44.440 19:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49a23213-93da-4ff8-bb5e-4c1f399e981e 00:06:44.440 19:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:44.701 19:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:44.701 19:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:44.701 19:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2490995 00:06:45.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:45.270 Nvme0n1 : 3.00 13719.33 53.59 0.00 0.00 0.00 0.00 0.00 00:06:45.270 =================================================================================================================== 00:06:45.270 Total : 13719.33 53.59 0.00 0.00 0.00 0.00 0.00 00:06:45.270 00:06:46.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:46.208 Nvme0n1 : 4.00 13782.00 53.84 0.00 0.00 0.00 0.00 0.00 00:06:46.208 =================================================================================================================== 00:06:46.208 Total : 13782.00 53.84 0.00 0.00 0.00 0.00 0.00 00:06:46.208 00:06:47.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:06:47.585 Nvme0n1 : 5.00 13813.00 53.96 0.00 0.00 0.00 0.00 0.00 00:06:47.585 =================================================================================================================== 00:06:47.585 Total : 13813.00 53.96 0.00 0.00 0.00 0.00 0.00 00:06:47.585 00:06:48.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:48.521 Nvme0n1 : 6.00 13851.00 54.11 0.00 0.00 0.00 0.00 0.00 00:06:48.521 =================================================================================================================== 00:06:48.521 Total : 13851.00 54.11 0.00 0.00 0.00 0.00 0.00 00:06:48.521 00:06:49.460 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.460 Nvme0n1 : 7.00 13878.57 54.21 0.00 0.00 0.00 0.00 0.00 00:06:49.460 =================================================================================================================== 00:06:49.460 Total : 13878.57 54.21 0.00 0.00 0.00 0.00 0.00 00:06:49.460 00:06:50.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.397 Nvme0n1 : 8.00 13898.88 54.29 0.00 0.00 0.00 0.00 0.00 00:06:50.397 =================================================================================================================== 00:06:50.397 Total : 13898.88 54.29 0.00 0.00 0.00 0.00 0.00 00:06:50.397 00:06:51.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.335 Nvme0n1 : 9.00 13928.11 54.41 0.00 0.00 0.00 0.00 0.00 00:06:51.335 =================================================================================================================== 00:06:51.335 Total : 13928.11 54.41 0.00 0.00 0.00 0.00 0.00 00:06:51.335 00:06:52.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.272 Nvme0n1 : 10.00 13939.40 54.45 0.00 0.00 0.00 0.00 0.00 00:06:52.272 =================================================================================================================== 00:06:52.272 Total : 13939.40 54.45 0.00 0.00 0.00 0.00 0.00 00:06:52.272 00:06:52.272 00:06:52.272 Latency(us) 00:06:52.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.272 Nvme0n1 : 10.00 13947.11 54.48 0.00 0.00 9172.09 2390.85 17767.54 00:06:52.272 =================================================================================================================== 00:06:52.272 Total : 13947.11 54.48 0.00 0.00 9172.09 2390.85 17767.54 00:06:52.272 0 00:06:52.272 19:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2490890 00:06:52.272 19:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2490890 ']' 00:06:52.272 19:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2490890 00:06:52.272 19:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:06:52.272 19:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.272 19:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2490890 00:06:52.272 19:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:52.272 
19:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:52.272 19:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2490890' 00:06:52.272 killing process with pid 2490890 00:06:52.272 19:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2490890 00:06:52.272 Received shutdown signal, test time was about 10.000000 seconds 00:06:52.272 00:06:52.272 Latency(us) 00:06:52.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.272 =================================================================================================================== 00:06:52.272 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:52.272 19:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2490890 00:06:52.529 19:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:52.787 19:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:53.353 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:53.353 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49a23213-93da-4ff8-bb5e-4c1f399e981e 00:06:53.611 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:06:53.611 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:53.611 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:53.868 [2024-07-24 19:03:59.679592] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:53.868 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49a23213-93da-4ff8-bb5e-4c1f399e981e 00:06:53.868 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:06:53.868 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49a23213-93da-4ff8-bb5e-4c1f399e981e 00:06:53.868 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.868 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.869 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.869 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.869 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.869 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.869 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.869 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:53.869 19:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49a23213-93da-4ff8-bb5e-4c1f399e981e 00:06:54.126 request: 00:06:54.126 { 00:06:54.126 "uuid": "49a23213-93da-4ff8-bb5e-4c1f399e981e", 00:06:54.126 "method": "bdev_lvol_get_lvstores", 00:06:54.126 "req_id": 1 00:06:54.126 } 00:06:54.126 Got JSON-RPC error response 00:06:54.126 response: 00:06:54.127 { 00:06:54.127 "code": -19, 00:06:54.127 "message": "No such device" 00:06:54.127 } 00:06:54.127 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:06:54.127 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.127 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.127 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.127 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:54.384 aio_bdev 00:06:54.384 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8ecd2bbf-04b9-42ad-9b2d-5a60797d9df8 00:06:54.384 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=8ecd2bbf-04b9-42ad-9b2d-5a60797d9df8 00:06:54.384 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:54.384 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:06:54.384 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:54.384 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:54.384 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:54.641 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 8ecd2bbf-04b9-42ad-9b2d-5a60797d9df8 -t 2000 00:06:54.901 [ 00:06:54.901 { 00:06:54.901 "name": "8ecd2bbf-04b9-42ad-9b2d-5a60797d9df8", 00:06:54.901 "aliases": [ 00:06:54.901 "lvs/lvol" 00:06:54.901 ], 00:06:54.901 "product_name": "Logical Volume", 00:06:54.901 "block_size": 4096, 00:06:54.901 "num_blocks": 38912, 00:06:54.901 "uuid": "8ecd2bbf-04b9-42ad-9b2d-5a60797d9df8", 00:06:54.901 "assigned_rate_limits": { 00:06:54.901 "rw_ios_per_sec": 0, 00:06:54.901 "rw_mbytes_per_sec": 0, 00:06:54.901 "r_mbytes_per_sec": 0, 00:06:54.901 "w_mbytes_per_sec": 0 00:06:54.901 }, 00:06:54.901 "claimed": false, 00:06:54.901 "zoned": false, 00:06:54.901 "supported_io_types": { 00:06:54.901 "read": true, 00:06:54.901 "write": true, 00:06:54.901 "unmap": true, 00:06:54.901 "flush": false, 00:06:54.901 "reset": true, 00:06:54.901 "nvme_admin": false, 00:06:54.901 "nvme_io": false, 00:06:54.901 "nvme_io_md": false, 00:06:54.901 "write_zeroes": true, 00:06:54.901 "zcopy": false, 00:06:54.901 "get_zone_info": false, 00:06:54.901 "zone_management": false, 00:06:54.901 "zone_append": false, 00:06:54.901 "compare": false, 00:06:54.901 "compare_and_write": false, 00:06:54.901 "abort": false, 00:06:54.901 "seek_hole": true, 00:06:54.901 "seek_data": true, 00:06:54.901 "copy": false, 00:06:54.901 "nvme_iov_md": false 00:06:54.901 }, 00:06:54.901 "driver_specific": { 00:06:54.901 "lvol": { 00:06:54.901 "lvol_store_uuid": "49a23213-93da-4ff8-bb5e-4c1f399e981e", 00:06:54.901 "base_bdev": "aio_bdev", 00:06:54.901 "thin_provision": false, 00:06:54.901 "num_allocated_clusters": 38, 00:06:54.901 "snapshot": false, 00:06:54.901 "clone": false, 00:06:54.901 "esnap_clone": false 00:06:54.901 } 00:06:54.901 } 00:06:54.901 } 00:06:54.901 ] 00:06:55.159 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:06:55.159 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49a23213-93da-4ff8-bb5e-4c1f399e981e 00:06:55.159 19:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:55.417 19:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:55.417 19:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49a23213-93da-4ff8-bb5e-4c1f399e981e 00:06:55.417 19:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:55.675 19:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:55.675 19:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8ecd2bbf-04b9-42ad-9b2d-5a60797d9df8 00:06:55.933 19:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 49a23213-93da-4ff8-bb5e-4c1f399e981e 00:06:56.190 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:56.449 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:56.449 00:06:56.449 real 0m18.097s 00:06:56.449 user 0m17.603s 00:06:56.449 sys 0m1.943s 00:06:56.449 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.449 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:56.449 ************************************ 00:06:56.449 END TEST lvs_grow_clean 00:06:56.449 ************************************ 00:06:56.711 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:56.711 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:56.711 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.711 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:56.711 ************************************ 00:06:56.711 START TEST lvs_grow_dirty 00:06:56.711 ************************************ 00:06:56.711 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:06:56.711 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:56.711 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:56.711 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:56.711 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:56.711 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:56.711 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:56.711 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:56.711 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:56.711 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:57.020 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:57.020 19:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:57.293 19:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=1b6a0db1-6da2-45f1-a37f-364d1efd0c8d 00:06:57.293 19:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b6a0db1-6da2-45f1-a37f-364d1efd0c8d 00:06:57.293 19:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:57.550 19:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:57.550 19:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:57.550 19:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1b6a0db1-6da2-45f1-a37f-364d1efd0c8d lvol 150 00:06:57.807 19:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6 00:06:57.807 19:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:57.807 19:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:58.064 [2024-07-24 19:04:03.969164] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:58.064 [2024-07-24 19:04:03.969240] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:58.064 true 00:06:58.064 19:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b6a0db1-6da2-45f1-a37f-364d1efd0c8d 00:06:58.064 19:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:58.321 19:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:58.321 19:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:58.579 19:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6 00:06:58.837 19:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:59.094 [2024-07-24 19:04:04.956213] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.094 19:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
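
The grow itself is deliberately issued while bdevperf is driving writes (two seconds into the run, per the script's sleep 2), so the test covers online expansion rather than an idle resize. It is a single RPC against the already-rescanned bdev, followed by a cluster-count check:

    rpc.py bdev_lvol_grow_lvstore -u "$lvs"                          # 49 -> 99 data clusters
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

400 MiB at a 4 MiB cluster size yields 100 clusters, one of which holds lvstore metadata, hence the 99 the test expects.
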
00:06:59.353 19:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2492652 00:06:59.353 19:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:59.353 19:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:59.353 19:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2492652 /var/tmp/bdevperf.sock 00:06:59.353 19:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2492652 ']' 00:06:59.353 19:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:59.353 19:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.353 19:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:59.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:59.353 19:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.353 19:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:59.353 [2024-07-24 19:04:05.257322] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
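
On the initiator side, bdevperf runs as a second SPDK app in the root namespace, attaches to the exported subsystem over TCP, and then perform_tests drives the 10-second 4 KiB random-write load whose per-second totals follow. Condensed (paths relative to the spdk tree; -z starts bdevperf idle so it can be driven over its own RPC socket):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
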
00:06:59.353 [2024-07-24 19:04:05.257420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492652 ] 00:06:59.353 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.353 [2024-07-24 19:04:05.312161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.611 [2024-07-24 19:04:05.429639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.611 19:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.611 19:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:06:59.611 19:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:00.176 Nvme0n1 00:07:00.176 19:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:00.434 [ 00:07:00.434 { 00:07:00.434 "name": "Nvme0n1", 00:07:00.434 "aliases": [ 00:07:00.434 "1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6" 00:07:00.434 ], 00:07:00.434 "product_name": "NVMe disk", 00:07:00.434 "block_size": 4096, 00:07:00.434 "num_blocks": 38912, 00:07:00.434 "uuid": "1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6", 00:07:00.434 "assigned_rate_limits": { 00:07:00.434 "rw_ios_per_sec": 0, 00:07:00.434 "rw_mbytes_per_sec": 0, 00:07:00.434 "r_mbytes_per_sec": 0, 00:07:00.434 "w_mbytes_per_sec": 0 00:07:00.434 }, 00:07:00.434 "claimed": false, 00:07:00.434 "zoned": false, 00:07:00.434 "supported_io_types": { 00:07:00.434 "read": true, 00:07:00.434 "write": true, 00:07:00.434 "unmap": true, 00:07:00.434 "flush": true, 00:07:00.434 "reset": true, 00:07:00.434 "nvme_admin": true, 00:07:00.434 "nvme_io": true, 00:07:00.434 "nvme_io_md": false, 00:07:00.434 "write_zeroes": true, 00:07:00.434 "zcopy": false, 00:07:00.434 "get_zone_info": false, 00:07:00.434 "zone_management": false, 00:07:00.434 "zone_append": false, 00:07:00.434 "compare": true, 00:07:00.434 "compare_and_write": true, 00:07:00.434 "abort": true, 00:07:00.434 "seek_hole": false, 00:07:00.434 "seek_data": false, 00:07:00.434 "copy": true, 00:07:00.434 "nvme_iov_md": false 00:07:00.434 }, 00:07:00.434 "memory_domains": [ 00:07:00.434 { 00:07:00.434 "dma_device_id": "system", 00:07:00.434 "dma_device_type": 1 00:07:00.434 } 00:07:00.434 ], 00:07:00.434 "driver_specific": { 00:07:00.434 "nvme": [ 00:07:00.434 { 00:07:00.434 "trid": { 00:07:00.434 "trtype": "TCP", 00:07:00.434 "adrfam": "IPv4", 00:07:00.434 "traddr": "10.0.0.2", 00:07:00.434 "trsvcid": "4420", 00:07:00.434 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:00.434 }, 00:07:00.434 "ctrlr_data": { 00:07:00.434 "cntlid": 1, 00:07:00.434 "vendor_id": "0x8086", 00:07:00.434 "model_number": "SPDK bdev Controller", 00:07:00.434 "serial_number": "SPDK0", 00:07:00.434 "firmware_revision": "24.09", 00:07:00.434 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:00.434 "oacs": { 00:07:00.434 "security": 0, 00:07:00.434 "format": 0, 00:07:00.434 "firmware": 0, 00:07:00.434 "ns_manage": 0 00:07:00.434 }, 00:07:00.434 
"multi_ctrlr": true, 00:07:00.434 "ana_reporting": false 00:07:00.434 }, 00:07:00.434 "vs": { 00:07:00.434 "nvme_version": "1.3" 00:07:00.434 }, 00:07:00.434 "ns_data": { 00:07:00.434 "id": 1, 00:07:00.434 "can_share": true 00:07:00.434 } 00:07:00.434 } 00:07:00.434 ], 00:07:00.434 "mp_policy": "active_passive" 00:07:00.434 } 00:07:00.434 } 00:07:00.434 ] 00:07:00.434 19:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2492756 00:07:00.434 19:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:00.434 19:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:00.434 Running I/O for 10 seconds... 00:07:01.807 Latency(us) 00:07:01.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:01.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.807 Nvme0n1 : 1.00 13717.00 53.58 0.00 0.00 0.00 0.00 0.00 00:07:01.807 =================================================================================================================== 00:07:01.807 Total : 13717.00 53.58 0.00 0.00 0.00 0.00 0.00 00:07:01.807 00:07:02.371 19:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1b6a0db1-6da2-45f1-a37f-364d1efd0c8d 00:07:02.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.372 Nvme0n1 : 2.00 13907.00 54.32 0.00 0.00 0.00 0.00 0.00 00:07:02.372 =================================================================================================================== 00:07:02.372 Total : 13907.00 54.32 0.00 0.00 0.00 0.00 0.00 00:07:02.372 00:07:02.630 true 00:07:02.630 19:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b6a0db1-6da2-45f1-a37f-364d1efd0c8d 00:07:02.630 19:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:02.888 19:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:02.888 19:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:02.888 19:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2492756 00:07:03.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.454 Nvme0n1 : 3.00 13970.33 54.57 0.00 0.00 0.00 0.00 0.00 00:07:03.454 =================================================================================================================== 00:07:03.454 Total : 13970.33 54.57 0.00 0.00 0.00 0.00 0.00 00:07:03.454 00:07:04.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.389 Nvme0n1 : 4.00 13970.25 54.57 0.00 0.00 0.00 0.00 0.00 00:07:04.389 =================================================================================================================== 00:07:04.389 Total : 13970.25 54.57 0.00 0.00 0.00 0.00 0.00 00:07:04.389 00:07:05.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:07:05.764 Nvme0n1 : 5.00 14033.80 54.82 0.00 0.00 0.00 0.00 0.00 00:07:05.764 =================================================================================================================== 00:07:05.764 Total : 14033.80 54.82 0.00 0.00 0.00 0.00 0.00 00:07:05.764 00:07:06.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.698 Nvme0n1 : 6.00 14078.83 55.00 0.00 0.00 0.00 0.00 0.00 00:07:06.698 =================================================================================================================== 00:07:06.698 Total : 14078.83 55.00 0.00 0.00 0.00 0.00 0.00 00:07:06.698 00:07:07.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.632 Nvme0n1 : 7.00 14117.71 55.15 0.00 0.00 0.00 0.00 0.00 00:07:07.632 =================================================================================================================== 00:07:07.632 Total : 14117.71 55.15 0.00 0.00 0.00 0.00 0.00 00:07:07.632 00:07:08.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.567 Nvme0n1 : 8.00 14146.88 55.26 0.00 0.00 0.00 0.00 0.00 00:07:08.567 =================================================================================================================== 00:07:08.567 Total : 14146.88 55.26 0.00 0.00 0.00 0.00 0.00 00:07:08.567 00:07:09.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.500 Nvme0n1 : 9.00 14183.67 55.40 0.00 0.00 0.00 0.00 0.00 00:07:09.500 =================================================================================================================== 00:07:09.500 Total : 14183.67 55.40 0.00 0.00 0.00 0.00 0.00 00:07:09.500 00:07:10.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.435 Nvme0n1 : 10.00 14200.40 55.47 0.00 0.00 0.00 0.00 0.00 00:07:10.435 =================================================================================================================== 00:07:10.435 Total : 14200.40 55.47 0.00 0.00 0.00 0.00 0.00 00:07:10.435 00:07:10.435 00:07:10.435 Latency(us) 00:07:10.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.435 Nvme0n1 : 10.01 14204.47 55.49 0.00 0.00 9005.53 4927.34 21068.61 00:07:10.435 =================================================================================================================== 00:07:10.435 Total : 14204.47 55.49 0.00 0.00 9005.53 4927.34 21068.61 00:07:10.435 0 00:07:10.435 19:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2492652 00:07:10.435 19:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2492652 ']' 00:07:10.435 19:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2492652 00:07:10.435 19:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:10.435 19:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.435 19:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2492652 00:07:10.693 19:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:10.693 
19:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:10.693 19:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2492652' 00:07:10.693 killing process with pid 2492652 00:07:10.693 19:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2492652 00:07:10.693 Received shutdown signal, test time was about 10.000000 seconds 00:07:10.693 00:07:10.693 Latency(us) 00:07:10.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.693 =================================================================================================================== 00:07:10.693 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:10.693 19:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2492652 00:07:10.693 19:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:10.951 19:04:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:11.517 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:11.517 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b6a0db1-6da2-45f1-a37f-364d1efd0c8d 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2490546 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2490546 00:07:11.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2490546 Killed "${NVMF_APP[@]}" "$@" 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2493783 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@482 -- # waitforlisten 2493783 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2493783 ']' 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.776 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:11.776 [2024-07-24 19:04:17.666476] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:07:11.776 [2024-07-24 19:04:17.666582] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.776 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.776 [2024-07-24 19:04:17.733246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.035 [2024-07-24 19:04:17.849054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.035 [2024-07-24 19:04:17.849122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.035 [2024-07-24 19:04:17.849138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.035 [2024-07-24 19:04:17.849151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.035 [2024-07-24 19:04:17.849163] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
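The restart just traced is the harness's nvmfappstart/waitforlisten pattern: launch nvmf_tgt inside the test namespace, then block until its JSON-RPC socket answers. A minimal sketch of that wait, with paths taken from this run; using spdk_get_version as the liveness probe and a fixed 10 s retry budget are assumptions, not the harness's exact loop:

# Start the target in the test namespace and poll /var/tmp/spdk.sock.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
for _ in $(seq 1 100); do
    # Any cheap RPC proves the app is up and listening; spdk_get_version is one.
    "$SPDK/scripts/rpc.py" -t 1 spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
done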
00:07:12.035 [2024-07-24 19:04:17.849193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.035 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.035 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:12.035 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:12.035 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:12.035 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:12.035 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.035 19:04:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:12.293 [2024-07-24 19:04:18.288579] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:12.293 [2024-07-24 19:04:18.288720] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:12.293 [2024-07-24 19:04:18.288775] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:12.551 19:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:12.551 19:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6 00:07:12.551 19:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6 00:07:12.551 19:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:12.551 19:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:12.551 19:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:12.551 19:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:12.551 19:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:12.808 19:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6 -t 2000 00:07:13.066 [ 00:07:13.066 { 00:07:13.066 "name": "1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6", 00:07:13.066 "aliases": [ 00:07:13.066 "lvs/lvol" 00:07:13.066 ], 00:07:13.066 "product_name": "Logical Volume", 00:07:13.066 "block_size": 4096, 00:07:13.066 "num_blocks": 38912, 00:07:13.066 "uuid": "1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6", 00:07:13.066 "assigned_rate_limits": { 00:07:13.066 "rw_ios_per_sec": 0, 00:07:13.066 "rw_mbytes_per_sec": 0, 00:07:13.066 "r_mbytes_per_sec": 0, 00:07:13.066 "w_mbytes_per_sec": 0 00:07:13.066 }, 00:07:13.066 "claimed": false, 00:07:13.066 "zoned": false, 
00:07:13.066 "supported_io_types": { 00:07:13.066 "read": true, 00:07:13.066 "write": true, 00:07:13.066 "unmap": true, 00:07:13.066 "flush": false, 00:07:13.066 "reset": true, 00:07:13.066 "nvme_admin": false, 00:07:13.066 "nvme_io": false, 00:07:13.066 "nvme_io_md": false, 00:07:13.066 "write_zeroes": true, 00:07:13.066 "zcopy": false, 00:07:13.066 "get_zone_info": false, 00:07:13.066 "zone_management": false, 00:07:13.066 "zone_append": false, 00:07:13.066 "compare": false, 00:07:13.066 "compare_and_write": false, 00:07:13.066 "abort": false, 00:07:13.066 "seek_hole": true, 00:07:13.066 "seek_data": true, 00:07:13.066 "copy": false, 00:07:13.066 "nvme_iov_md": false 00:07:13.066 }, 00:07:13.066 "driver_specific": { 00:07:13.066 "lvol": { 00:07:13.066 "lvol_store_uuid": "1b6a0db1-6da2-45f1-a37f-364d1efd0c8d", 00:07:13.066 "base_bdev": "aio_bdev", 00:07:13.066 "thin_provision": false, 00:07:13.066 "num_allocated_clusters": 38, 00:07:13.066 "snapshot": false, 00:07:13.066 "clone": false, 00:07:13.066 "esnap_clone": false 00:07:13.066 } 00:07:13.066 } 00:07:13.066 } 00:07:13.066 ] 00:07:13.066 19:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:13.066 19:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:13.066 19:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b6a0db1-6da2-45f1-a37f-364d1efd0c8d 00:07:13.323 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:13.323 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b6a0db1-6da2-45f1-a37f-364d1efd0c8d 00:07:13.323 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:13.581 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:13.581 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:13.838 [2024-07-24 19:04:19.665881] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:13.838 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b6a0db1-6da2-45f1-a37f-364d1efd0c8d 00:07:13.838 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:13.838 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b6a0db1-6da2-45f1-a37f-364d1efd0c8d 00:07:13.838 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.838 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:07:13.838 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.838 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.838 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.838 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.838 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.838 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:13.838 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b6a0db1-6da2-45f1-a37f-364d1efd0c8d 00:07:14.096 request: 00:07:14.096 { 00:07:14.096 "uuid": "1b6a0db1-6da2-45f1-a37f-364d1efd0c8d", 00:07:14.096 "method": "bdev_lvol_get_lvstores", 00:07:14.096 "req_id": 1 00:07:14.096 } 00:07:14.096 Got JSON-RPC error response 00:07:14.096 response: 00:07:14.096 { 00:07:14.096 "code": -19, 00:07:14.096 "message": "No such device" 00:07:14.096 } 00:07:14.096 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:14.096 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:14.096 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:14.096 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:14.096 19:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:14.353 aio_bdev 00:07:14.353 19:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6 00:07:14.353 19:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6 00:07:14.353 19:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:14.353 19:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:14.353 19:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:14.353 19:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:14.353 19:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:14.610 19:04:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6 -t 2000 00:07:14.867 [ 00:07:14.867 { 00:07:14.867 "name": "1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6", 00:07:14.867 "aliases": [ 00:07:14.867 "lvs/lvol" 00:07:14.867 ], 00:07:14.867 "product_name": "Logical Volume", 00:07:14.867 "block_size": 4096, 00:07:14.867 "num_blocks": 38912, 00:07:14.867 "uuid": "1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6", 00:07:14.867 "assigned_rate_limits": { 00:07:14.867 "rw_ios_per_sec": 0, 00:07:14.867 "rw_mbytes_per_sec": 0, 00:07:14.867 "r_mbytes_per_sec": 0, 00:07:14.867 "w_mbytes_per_sec": 0 00:07:14.867 }, 00:07:14.867 "claimed": false, 00:07:14.867 "zoned": false, 00:07:14.867 "supported_io_types": { 00:07:14.867 "read": true, 00:07:14.867 "write": true, 00:07:14.867 "unmap": true, 00:07:14.867 "flush": false, 00:07:14.867 "reset": true, 00:07:14.867 "nvme_admin": false, 00:07:14.867 "nvme_io": false, 00:07:14.867 "nvme_io_md": false, 00:07:14.867 "write_zeroes": true, 00:07:14.867 "zcopy": false, 00:07:14.867 "get_zone_info": false, 00:07:14.867 "zone_management": false, 00:07:14.867 "zone_append": false, 00:07:14.867 "compare": false, 00:07:14.867 "compare_and_write": false, 00:07:14.867 "abort": false, 00:07:14.867 "seek_hole": true, 00:07:14.867 "seek_data": true, 00:07:14.867 "copy": false, 00:07:14.867 "nvme_iov_md": false 00:07:14.867 }, 00:07:14.867 "driver_specific": { 00:07:14.867 "lvol": { 00:07:14.867 "lvol_store_uuid": "1b6a0db1-6da2-45f1-a37f-364d1efd0c8d", 00:07:14.867 "base_bdev": "aio_bdev", 00:07:14.867 "thin_provision": false, 00:07:14.867 "num_allocated_clusters": 38, 00:07:14.867 "snapshot": false, 00:07:14.867 "clone": false, 00:07:14.867 "esnap_clone": false 00:07:14.867 } 00:07:14.867 } 00:07:14.867 } 00:07:14.867 ] 00:07:14.867 19:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:14.867 19:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b6a0db1-6da2-45f1-a37f-364d1efd0c8d 00:07:14.867 19:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:15.124 19:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:15.124 19:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b6a0db1-6da2-45f1-a37f-364d1efd0c8d 00:07:15.124 19:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:15.382 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:15.382 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1468b1cb-3d67-4499-9dc3-7ddf3f22bfb6 00:07:15.640 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1b6a0db1-6da2-45f1-a37f-364d1efd0c8d 
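Before the teardown just traced, the test asserted the post-recovery cluster accounting via bdev_lvol_get_lvstores. A condensed sketch of those assertions, using the lvstore UUID from this run: after the dirty grow the store should report 99 data clusters with 61 free, the other 38 backing the 38912-block lvol dumped by bdev_get_bdevs above:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
lvs=1b6a0db1-6da2-45f1-a37f-364d1efd0c8d
free_clusters=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
data_clusters=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
# 38 clusters * 4 MiB (default cluster size) = 38912 blocks * 4096 B, so 99 - 38 = 61 free.
(( free_clusters == 61 )) || exit 1
(( data_clusters == 99 )) || exit 1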
00:07:15.898 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:16.156 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:16.156 00:07:16.156 real 0m19.449s 00:07:16.156 user 0m49.907s 00:07:16.156 sys 0m4.448s 00:07:16.156 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.156 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:16.156 ************************************ 00:07:16.156 END TEST lvs_grow_dirty 00:07:16.156 ************************************ 00:07:16.157 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:16.157 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:16.157 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:16.157 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:16.157 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:16.157 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:16.157 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:16.157 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:16.157 19:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:16.157 nvmf_trace.0 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:16.157 rmmod nvme_tcp 00:07:16.157 rmmod nvme_fabrics 00:07:16.157 rmmod nvme_keyring 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2493783 ']' 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2493783 00:07:16.157 
19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2493783 ']' 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2493783 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2493783 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2493783' 00:07:16.157 killing process with pid 2493783 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2493783 00:07:16.157 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2493783 00:07:16.416 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:16.416 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:16.416 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:16.416 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:16.416 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:16.416 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.416 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.416 19:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:18.953 00:07:18.953 real 0m42.496s 00:07:18.953 user 1m13.215s 00:07:18.953 sys 0m7.989s 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:18.953 ************************************ 00:07:18.953 END TEST nvmf_lvs_grow 00:07:18.953 ************************************ 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.953 ************************************ 00:07:18.953 START TEST nvmf_bdev_io_wait 00:07:18.953 ************************************ 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:18.953 * Looking for test storage... 00:07:18.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:18.953 
19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:18.953 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:18.954 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:07:18.954 19:04:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:07:20.334 19:04:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:20.334 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:20.334 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:20.334 Found net devices under 0000:08:00.0: cvl_0_0 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:20.334 Found net devices under 0000:08:00.1: cvl_0_1 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:20.334 19:04:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.334 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:20.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:20.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:07:20.335 00:07:20.335 --- 10.0.0.2 ping statistics --- 00:07:20.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.335 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:20.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:20.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:07:20.335 00:07:20.335 --- 10.0.0.1 ping statistics --- 00:07:20.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.335 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2495738 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2495738 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2495738 ']' 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.335 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.593 [2024-07-24 19:04:26.362157] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
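The nvmf_tcp_init plumbing traced above reduces to a short namespace-per-target recipe; a sketch using this run's device names (cvl_0_0/cvl_0_1) and addresses, which will differ on other rigs:

# Move the target-side port into its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator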
00:07:20.593 [2024-07-24 19:04:26.362253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.593 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.593 [2024-07-24 19:04:26.433095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.593 [2024-07-24 19:04:26.555655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.593 [2024-07-24 19:04:26.555720] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.593 [2024-07-24 19:04:26.555737] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.593 [2024-07-24 19:04:26.555751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.593 [2024-07-24 19:04:26.555762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:20.593 [2024-07-24 19:04:26.555850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.593 [2024-07-24 19:04:26.555939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.593 [2024-07-24 19:04:26.555992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.593 [2024-07-24 19:04:26.555995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.851 19:04:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.851 [2024-07-24 19:04:26.712466] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.851 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.852 Malloc0 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.852 [2024-07-24 19:04:26.772490] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2495775 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2495777 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:20.852 { 00:07:20.852 "params": { 00:07:20.852 "name": "Nvme$subsystem", 00:07:20.852 "trtype": "$TEST_TRANSPORT", 00:07:20.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:20.852 "adrfam": "ipv4", 00:07:20.852 "trsvcid": "$NVMF_PORT", 00:07:20.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:20.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:20.852 "hdgst": ${hdgst:-false}, 00:07:20.852 "ddgst": ${ddgst:-false} 00:07:20.852 }, 00:07:20.852 "method": "bdev_nvme_attach_controller" 00:07:20.852 } 00:07:20.852 EOF 00:07:20.852 )") 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2495779 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:20.852 { 00:07:20.852 "params": { 00:07:20.852 "name": "Nvme$subsystem", 00:07:20.852 "trtype": "$TEST_TRANSPORT", 00:07:20.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:20.852 "adrfam": "ipv4", 00:07:20.852 "trsvcid": "$NVMF_PORT", 00:07:20.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:20.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:20.852 "hdgst": ${hdgst:-false}, 00:07:20.852 "ddgst": ${ddgst:-false} 00:07:20.852 }, 00:07:20.852 "method": "bdev_nvme_attach_controller" 00:07:20.852 } 00:07:20.852 EOF 00:07:20.852 )") 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2495781 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:20.852 { 00:07:20.852 "params": { 00:07:20.852 "name": "Nvme$subsystem", 00:07:20.852 "trtype": "$TEST_TRANSPORT", 00:07:20.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:20.852 "adrfam": "ipv4", 00:07:20.852 "trsvcid": "$NVMF_PORT", 00:07:20.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:20.852 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:20.852 "hdgst": ${hdgst:-false}, 00:07:20.852 "ddgst": ${ddgst:-false} 00:07:20.852 }, 00:07:20.852 "method": "bdev_nvme_attach_controller" 00:07:20.852 } 00:07:20.852 EOF 00:07:20.852 )") 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:20.852 { 00:07:20.852 "params": { 00:07:20.852 "name": "Nvme$subsystem", 00:07:20.852 "trtype": "$TEST_TRANSPORT", 00:07:20.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:20.852 "adrfam": "ipv4", 00:07:20.852 "trsvcid": "$NVMF_PORT", 00:07:20.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:20.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:20.852 "hdgst": ${hdgst:-false}, 00:07:20.852 "ddgst": ${ddgst:-false} 00:07:20.852 }, 00:07:20.852 "method": "bdev_nvme_attach_controller" 00:07:20.852 } 00:07:20.852 EOF 00:07:20.852 )") 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2495775 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:20.852 "params": { 00:07:20.852 "name": "Nvme1", 00:07:20.852 "trtype": "tcp", 00:07:20.852 "traddr": "10.0.0.2", 00:07:20.852 "adrfam": "ipv4", 00:07:20.852 "trsvcid": "4420", 00:07:20.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:20.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:20.852 "hdgst": false, 00:07:20.852 "ddgst": false 00:07:20.852 }, 00:07:20.852 "method": "bdev_nvme_attach_controller" 00:07:20.852 }' 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:07:20.852 "params": {
00:07:20.852 "name": "Nvme1",
00:07:20.852 "trtype": "tcp",
00:07:20.852 "traddr": "10.0.0.2",
00:07:20.852 "adrfam": "ipv4",
00:07:20.852 "trsvcid": "4420",
00:07:20.852 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:07:20.852 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:07:20.852 "hdgst": false,
00:07:20.852 "ddgst": false
00:07:20.852 },
00:07:20.852 "method": "bdev_nvme_attach_controller"
00:07:20.852 }'
00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:07:20.852 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:07:20.852 "params": {
00:07:20.852 "name": "Nvme1",
00:07:20.852 "trtype": "tcp",
00:07:20.852 "traddr": "10.0.0.2",
00:07:20.852 "adrfam": "ipv4",
00:07:20.852 "trsvcid": "4420",
00:07:20.852 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:07:20.853 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:07:20.853 "hdgst": false,
00:07:20.853 "ddgst": false
00:07:20.853 },
00:07:20.853 "method": "bdev_nvme_attach_controller"
00:07:20.853 }'
00:07:20.853 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:07:20.853 19:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:07:20.853 "params": {
00:07:20.853 "name": "Nvme1",
00:07:20.853 "trtype": "tcp",
00:07:20.853 "traddr": "10.0.0.2",
00:07:20.853 "adrfam": "ipv4",
00:07:20.853 "trsvcid": "4420",
00:07:20.853 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:07:20.853 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:07:20.853 "hdgst": false,
00:07:20.853 "ddgst": false
00:07:20.853 },
00:07:20.853 "method": "bdev_nvme_attach_controller"
00:07:20.853 }'
00:07:20.853 [2024-07-24 19:04:26.824238] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:07:20.853 [2024-07-24 19:04:26.824238] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:07:20.853 [2024-07-24 19:04:26.824238] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:07:20.853 [2024-07-24 19:04:26.824336] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:07:20.853 [2024-07-24 19:04:26.824336] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:07:20.853 [2024-07-24 19:04:26.824335] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:07:20.853 [2024-07-24 19:04:26.826036] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
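The three EAL banners above (their stdout was interleaved in the raw console and is untangled here; the fourth instance follows just below) come from four bdevperf processes that target/bdev_io_wait.sh launches back to back, one workload each on its own core mask and SHM id, which is why the file prefixes run spdk1 through spdk4. A sketch of the launch pattern, assembled from the flags in the trace (the /dev/fd/63 argument indicates the generated JSON is fed in through process substitution; the script's exact control flow may differ):
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
"$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
"$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
"$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
"$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!
# the script waits on each job in turn (the @37-@40 wait lines further on)
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"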
00:07:20.853 [2024-07-24 19:04:26.826125] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:21.111 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.111 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.111 [2024-07-24 19:04:26.967755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.111 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.111 [2024-07-24 19:04:27.038561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.111 [2024-07-24 19:04:27.063703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:07:21.111 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.111 [2024-07-24 19:04:27.107707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.372 [2024-07-24 19:04:27.136051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:07:21.372 [2024-07-24 19:04:27.173153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.372 [2024-07-24 19:04:27.205549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.372 [2024-07-24 19:04:27.268665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:07:21.372 Running I/O for 1 seconds... 00:07:21.372 Running I/O for 1 seconds... 00:07:21.632 Running I/O for 1 seconds... 00:07:21.632 Running I/O for 1 seconds... 00:07:22.568 00:07:22.568 Latency(us) 00:07:22.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.568 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:22.568 Nvme1n1 : 1.01 9574.59 37.40 0.00 0.00 13303.03 8883.77 21359.88 00:07:22.568 =================================================================================================================== 00:07:22.568 Total : 9574.59 37.40 0.00 0.00 13303.03 8883.77 21359.88 00:07:22.568 00:07:22.568 Latency(us) 00:07:22.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.568 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:22.568 Nvme1n1 : 1.00 121804.07 475.80 0.00 0.00 1046.66 394.43 1365.33 00:07:22.568 =================================================================================================================== 00:07:22.568 Total : 121804.07 475.80 0.00 0.00 1046.66 394.43 1365.33 00:07:22.568 00:07:22.568 Latency(us) 00:07:22.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.568 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:22.568 Nvme1n1 : 1.01 7991.33 31.22 0.00 0.00 15930.18 8252.68 26020.22 00:07:22.568 =================================================================================================================== 00:07:22.568 Total : 7991.33 31.22 0.00 0.00 15930.18 8252.68 26020.22 00:07:22.568 00:07:22.568 Latency(us) 00:07:22.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.568 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:22.568 Nvme1n1 : 1.01 8526.71 33.31 0.00 0.00 14948.16 7427.41 28156.21 00:07:22.568 =================================================================================================================== 00:07:22.568 Total : 8526.71 33.31 0.00 0.00 14948.16 7427.41 28156.21 00:07:22.827 19:04:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2495777 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2495779 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2495781 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:22.827 rmmod nvme_tcp 00:07:22.827 rmmod nvme_fabrics 00:07:22.827 rmmod nvme_keyring 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2495738 ']' 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2495738 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2495738 ']' 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2495738 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2495738 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2495738' 00:07:22.827 killing process with pid 2495738 00:07:22.827 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2495738 00:07:22.827 19:04:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2495738 00:07:23.110 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:23.110 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:23.110 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:23.110 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:23.110 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:23.110 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.110 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.110 19:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.028 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:25.028 00:07:25.028 real 0m6.607s 00:07:25.028 user 0m14.395s 00:07:25.028 sys 0m3.522s 00:07:25.028 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.028 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.028 ************************************ 00:07:25.028 END TEST nvmf_bdev_io_wait 00:07:25.028 ************************************ 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:25.287 ************************************ 00:07:25.287 START TEST nvmf_queue_depth 00:07:25.287 ************************************ 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:25.287 * Looking for test storage... 
00:07:25.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.287 19:04:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.287 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:07:25.288 19:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:07:27.193 19:04:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:27.193 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:27.193 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:27.193 19:04:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:27.193 Found net devices under 0000:08:00.0: cvl_0_0 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.193 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:27.194 Found net devices under 0000:08:00.1: cvl_0_1 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:27.194 
19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:27.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:27.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:07:27.194 00:07:27.194 --- 10.0.0.2 ping statistics --- 00:07:27.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.194 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:27.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:27.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:07:27.194 00:07:27.194 --- 10.0.0.1 ping statistics --- 00:07:27.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.194 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2497495 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2497495 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2497495 ']' 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.194 19:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.194 [2024-07-24 19:04:32.991642] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
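Before this target could be started, nvmf_tcp_init (traced above) isolated one port of the NIC pair in a network namespace and wired up addressing, and nvmfappstart then launched nvmf_tgt inside that namespace. The sequence, condensed from the trace; interface and namespace names are this rig's, and the polling loop at the end is only an assumed stand-in for autotest_common.sh's waitforlisten:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until ./scripts/rpc.py rpc_get_methods &> /dev/null; do   # poll the default RPC socket
    sleep 0.5
done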
00:07:27.194 [2024-07-24 19:04:32.991739] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.194 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.194 [2024-07-24 19:04:33.060355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.194 [2024-07-24 19:04:33.179089] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.194 [2024-07-24 19:04:33.179154] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.194 [2024-07-24 19:04:33.179170] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.194 [2024-07-24 19:04:33.179183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.194 [2024-07-24 19:04:33.179194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.194 [2024-07-24 19:04:33.179226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.453 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.454 [2024-07-24 19:04:33.319576] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.454 Malloc0 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.454 [2024-07-24 19:04:33.382382] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2497517 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2497517 /var/tmp/bdevperf.sock 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2497517 ']' 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:27.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.454 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.454 [2024-07-24 19:04:33.434977] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
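The rpc_cmd traces above assemble the queue-depth target: a TCP transport, a 64 MiB malloc bdev, and subsystem cnode1 exporting it on 10.0.0.2:4420, after which bdevperf is started idle (-z) at queue depth 1024 on its own RPC socket. The same setup issued directly through rpc.py, parameters copied from the trace (rpc_cmd is a thin wrapper around rpc.py):
rpc() { ./scripts/rpc.py "$@"; }                     # target RPCs use the default /var/tmp/spdk.sock
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev with 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevperf then attaches through its own socket before perform_tests drives I/O:
rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1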
00:07:27.454 [2024-07-24 19:04:33.435074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497517 ] 00:07:27.454 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.713 [2024-07-24 19:04:33.497001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.713 [2024-07-24 19:04:33.614558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.713 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.713 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:27.713 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:27.713 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.713 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.973 NVMe0n1 00:07:27.973 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.973 19:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:27.973 Running I/O for 10 seconds... 00:07:40.186 00:07:40.186 Latency(us) 00:07:40.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.186 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:40.186 Verification LBA range: start 0x0 length 0x4000 00:07:40.186 NVMe0n1 : 10.10 7248.00 28.31 0.00 0.00 140408.28 28350.39 82721.00 00:07:40.186 =================================================================================================================== 00:07:40.186 Total : 7248.00 28.31 0.00 0.00 140408.28 28350.39 82721.00 00:07:40.186 0 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2497517 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2497517 ']' 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2497517 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2497517 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2497517' 00:07:40.186 killing process with pid 2497517 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2497517 00:07:40.186 Received shutdown 
signal, test time was about 10.000000 seconds 00:07:40.186 00:07:40.186 Latency(us) 00:07:40.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.186 =================================================================================================================== 00:07:40.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2497517 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:40.186 rmmod nvme_tcp 00:07:40.186 rmmod nvme_fabrics 00:07:40.186 rmmod nvme_keyring 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2497495 ']' 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2497495 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2497495 ']' 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2497495 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2497495 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2497495' 00:07:40.186 killing process with pid 2497495 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2497495 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2497495 00:07:40.186 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:40.187 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:40.187 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:40.187 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:40.187 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:40.187 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.187 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.187 19:04:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.755 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:40.755 00:07:40.755 real 0m15.614s 00:07:40.755 user 0m20.824s 00:07:40.755 sys 0m3.450s 00:07:40.755 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.755 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:40.755 ************************************ 00:07:40.755 END TEST nvmf_queue_depth 00:07:40.755 ************************************ 00:07:40.755 19:04:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:40.755 19:04:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:40.755 19:04:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.755 19:04:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.755 ************************************ 00:07:40.755 START TEST nvmf_target_multipath 00:07:40.755 ************************************ 00:07:40.755 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:41.014 * Looking for test storage... 
00:07:41.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.014 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:07:41.015 19:04:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
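The e810/x722/mlx arrays being declared here are how gather_supported_nvmf_pci_devs buckets the host's NICs by PCI vendor:device ID before picking test interfaces; the matching continues in the trace below. A stand-alone sketch of the same classification (the harness reads a pre-built pci_bus_cache; the lspci parsing here is an illustrative assumption):

    intel=0x8086 mellanox=0x15b3
    declare -a e810=() x722=() mlx=()
    while read -r addr vendor device; do
            case "$vendor:$device" in
                    "$intel:0x1592" | "$intel:0x159b") e810+=("$addr") ;;  # Intel E810 family
                    "$intel:0x37d2")                   x722+=("$addr") ;;  # Intel x722
                    "$mellanox:"*)                     mlx+=("$addr")  ;;  # Mellanox ConnectX IDs
            esac
    done < <(lspci -Dn | awk '{ split($3, id, ":"); print $1, "0x" id[1], "0x" id[2] }')

On this rig both ports report 0x8086:0x159b, so they land in e810, and the trace below resolves each to its net device through /sys/bus/pci/devices/$pci/net/ (cvl_0_0 and cvl_0_1).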
00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:42.922 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:42.922 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:42.922 Found net devices under 0000:08:00.0: cvl_0_0 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.922 19:04:48 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:42.922 Found net devices under 0000:08:00.1: cvl_0_1 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:42.922 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:42.923 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:07:42.923 00:07:42.923 --- 10.0.0.2 ping statistics --- 00:07:42.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.923 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:42.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:07:42.923 00:07:42.923 --- 10.0.0.1 ping statistics --- 00:07:42.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.923 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:42.923 only one NIC for nvmf test 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.923 rmmod nvme_tcp 00:07:42.923 rmmod nvme_fabrics 00:07:42.923 rmmod nvme_keyring 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.923 19:04:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:44.832 00:07:44.832 real 0m4.060s 
00:07:44.832 user 0m0.729s 00:07:44.832 sys 0m1.315s 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:44.832 ************************************ 00:07:44.832 END TEST nvmf_target_multipath 00:07:44.832 ************************************ 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.832 19:04:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.091 ************************************ 00:07:45.091 START TEST nvmf_zcopy 00:07:45.091 ************************************ 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:45.091 * Looking for test storage... 00:07:45.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.091 19:04:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.091 19:04:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.091 19:04:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:07:46.996 19:04:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:46.996 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:46.996 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:46.996 Found net devices under 0000:08:00.0: cvl_0_0 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:46.996 Found net devices under 0000:08:00.1: cvl_0_1 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:07:46.996 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.997 19:04:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:46.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:07:46.997 00:07:46.997 --- 10.0.0.2 ping statistics --- 00:07:46.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.997 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:46.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:07:46.997 00:07:46.997 --- 10.0.0.1 ping statistics --- 00:07:46.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.997 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2501507 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2501507 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2501507 ']' 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.997 19:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:46.997 [2024-07-24 19:04:52.750323] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
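The nvmf_tcp_init sequence traced above builds a target/initiator pair on a single host: one port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then started inside the namespace. Collected into a stand-alone sketch (commands copied from the trace, with the jenkins build path shortened; run as root):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                        # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target sanity check
    ip netns exec "$NS" ping -c 1 10.0.0.1                 # target -> initiator sanity check
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &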
00:07:46.997 [2024-07-24 19:04:52.750420] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.997 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.997 [2024-07-24 19:04:52.819396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.997 [2024-07-24 19:04:52.938020] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.997 [2024-07-24 19:04:52.938090] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.997 [2024-07-24 19:04:52.938105] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.997 [2024-07-24 19:04:52.938118] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.997 [2024-07-24 19:04:52.938130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:46.997 [2024-07-24 19:04:52.938161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:47.255 [2024-07-24 19:04:53.078909] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:47.255 [2024-07-24 19:04:53.095079] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:47.255 malloc0 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:47.255 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:47.255 { 00:07:47.255 "params": { 00:07:47.255 "name": "Nvme$subsystem", 00:07:47.255 "trtype": "$TEST_TRANSPORT", 00:07:47.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.256 "adrfam": "ipv4", 00:07:47.256 "trsvcid": "$NVMF_PORT", 00:07:47.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.256 "hdgst": ${hdgst:-false}, 00:07:47.256 "ddgst": ${ddgst:-false} 00:07:47.256 }, 00:07:47.256 "method": "bdev_nvme_attach_controller" 00:07:47.256 } 00:07:47.256 EOF 00:07:47.256 )") 00:07:47.256 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:07:47.256 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
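Assembling the zcopy target takes the handful of RPCs traced above: a TCP transport created with '-o -c 0 --zcopy', a subsystem, a listener on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as namespace 1. bdevperf then connects as the initiator with a config that gen_nvmf_target_json assembles and hands over an anonymous descriptor (--json /dev/fd/62). A condensed sketch (the rpc.py calls mirror the traced rpc_cmd lines; the outer "subsystems" wrapper is reconstructed from how bdevperf consumes --json configs and does not appear verbatim in the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    gen_nvmf_target_json() {
            # sketch: the real helper builds the inner fragment from a heredoc, then pipes it through jq
            jq -n '{ subsystems: [ { subsystem: "bdev", config: [ {
                      method: "bdev_nvme_attach_controller",
                      params: { name: "Nvme1", trtype: "tcp", traddr: "10.0.0.2",
                                adrfam: "ipv4", trsvcid: "4420",
                                subnqn: "nqn.2016-06.io.spdk:cnode1",
                                hostnqn: "nqn.2016-06.io.spdk:host1",
                                hdgst: false, ddgst: false } } ] } ] }'
    }
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192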
00:07:47.256 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:07:47.256 19:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:07:47.256 "params": {
00:07:47.256 "name": "Nvme1",
00:07:47.256 "trtype": "tcp",
00:07:47.256 "traddr": "10.0.0.2",
00:07:47.256 "adrfam": "ipv4",
00:07:47.256 "trsvcid": "4420",
00:07:47.256 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:07:47.256 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:07:47.256 "hdgst": false,
00:07:47.256 "ddgst": false
00:07:47.256 },
00:07:47.256 "method": "bdev_nvme_attach_controller"
00:07:47.256 }'
00:07:47.256 [2024-07-24 19:04:53.185765] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:07:47.256 [2024-07-24 19:04:53.185857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501533 ]
00:07:47.256 EAL: No free 2048 kB hugepages reported on node 1
00:07:47.256 [2024-07-24 19:04:53.247511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:47.515 [2024-07-24 19:04:53.367327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:47.773 Running I/O for 10 seconds...
00:07:57.736
00:07:57.736 Latency(us)
00:07:57.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:57.736 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:07:57.736 Verification LBA range: start 0x0 length 0x1000
00:07:57.736 Nvme1n1 : 10.02 4897.63 38.26 0.00 0.00 26059.44 3762.25 35535.08
00:07:57.736 ===================================================================================================================
00:07:57.736 Total : 4897.63 38.26 0.00 0.00 26059.44 3762.25 35535.08
00:07:57.993 19:05:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2502532
00:07:57.993 19:05:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:07:57.993 19:05:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:07:57.993 19:05:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:07:57.993 19:05:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:07:57.993 19:05:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:07:57.993 19:05:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:07:57.993 19:05:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:07:57.993 19:05:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:07:57.993 {
00:07:57.993 "params": {
00:07:57.993 "name": "Nvme$subsystem",
00:07:57.993 "trtype": "$TEST_TRANSPORT",
00:07:57.993 "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:57.993 "adrfam": "ipv4",
00:07:57.993 "trsvcid": "$NVMF_PORT",
00:07:57.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:57.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:57.993 "hdgst": ${hdgst:-false},
00:07:57.993 "ddgst": ${ddgst:-false}
00:07:57.993 },
00:07:57.993 "method": "bdev_nvme_attach_controller"
00:07:57.993 }
00:07:57.993 EOF
00:07:57.993 )")
00:07:57.993 19:05:03
00:07:57.993 19:05:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:07:57.993 [2024-07-24 19:05:03.931318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-07-24 19:05:03.931361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:07:57.993 19:05:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:07:57.993 19:05:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:07:57.993 19:05:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:07:57.993 "params": {
00:07:57.993 "name": "Nvme1",
00:07:57.993 "trtype": "tcp",
00:07:57.993 "traddr": "10.0.0.2",
00:07:57.993 "adrfam": "ipv4",
00:07:57.993 "trsvcid": "4420",
00:07:57.993 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:07:57.993 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:07:57.993 "hdgst": false,
00:07:57.993 "ddgst": false
00:07:57.993 },
00:07:57.993 "method": "bdev_nvme_attach_controller"
00:07:57.993 }'
[condensed: the same two *ERROR* records repeat with fresh timestamps at 19:05:03.939, .947, .955, .963 and .971]
00:07:57.993 [2024-07-24 19:05:03.972070] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:07:57.994 [2024-07-24 19:05:03.972161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502532 ]
00:07:57.994 [2024-07-24 19:05:03.979384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-07-24 19:05:03.979407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[condensed: the same two *ERROR* records repeat at 19:05:03.987 and 19:05:03.995]
00:07:57.994 EAL: No free 2048 kB hugepages reported on node 1
[condensed: the same two *ERROR* records repeat at 19:05:04.003, .011, .019 and .027]
00:07:58.256 [2024-07-24 19:05:04.033779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[condensed: the same two *ERROR* records repeat at 19:05:04.035, .043, .051, .059, .067 and .075]
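Note: the recurring *ERROR* pair is the target refusing a duplicate namespace add: NSID 1 was claimed by the rpc_cmd nvmf_subsystem_add_ns call at target/zcopy.sh@30 earlier in this log, so every later attempt fails in spdk_nvmf_subsystem_add_ns_ext and is reported back through the nvmf_rpc.c pause path named in the second frame. A single failing attempt can be reproduced against a running target with the same RPC the trace uses (the scripts/rpc.py path relative to the checkout is an assumption here):

# Assumes an SPDK nvmf target is up with malloc0 already attached to
# nqn.2016-06.io.spdk:cnode1 as NSID 1 (as zcopy.sh@30 did above).
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Expected: non-zero exit, and the target logs the pair
# "Requested NSID 1 already in use" / "Unable to add namespace".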
[condensed: the same two *ERROR* records continue roughly every 8 ms from 19:05:04.083 through 19:05:04.147]
00:07:58.257 [2024-07-24 19:05:04.154161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[condensed: the same two *ERROR* records continue roughly every 8 ms from 19:05:04.155 through 19:05:04.468, then at 19:05:04.517 and 19:05:04.524]
00:07:58.773 Running I/O for 5 seconds...
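Note: the cadence of the condensed runs (one add_ns attempt roughly every 8 ms above, and roughly every 13 ms in the run that follows) points at a driver loop keyed to the background perf process. A plausible shape for that loop, stated as an assumption rather than the literal zcopy.sh code, is:

# perfpid (2502532 above) is the backgrounded bdevperf run; keep issuing
# add_ns attempts until it exits, then reap it and take its exit status.
# rpc_cmd is the test harness helper already traced in this log.
while kill -0 "$perfpid" 2>/dev/null; do
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"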
00:07:58.773 [2024-07-24 19:05:04.532989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-07-24 19:05:04.533017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[condensed: the same two *ERROR* records repeat roughly every 13 ms from 19:05:04.549 through 19:05:06.926 while the randrw run is in flight]
00:08:01.099 [2024-07-24 19:05:06.938603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.099 [2024-07-24 19:05:06.938631]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.099 [2024-07-24 19:05:06.951189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.099 [2024-07-24 19:05:06.951219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.099 [2024-07-24 19:05:06.964533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.099 [2024-07-24 19:05:06.964570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.099 [2024-07-24 19:05:06.977542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.099 [2024-07-24 19:05:06.977572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.099 [2024-07-24 19:05:06.990921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.099 [2024-07-24 19:05:06.990951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.099 [2024-07-24 19:05:07.003671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.099 [2024-07-24 19:05:07.003700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.099 [2024-07-24 19:05:07.017006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.099 [2024-07-24 19:05:07.017036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.099 [2024-07-24 19:05:07.029662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.099 [2024-07-24 19:05:07.029691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.099 [2024-07-24 19:05:07.042469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.099 [2024-07-24 19:05:07.042507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.099 [2024-07-24 19:05:07.055370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.099 [2024-07-24 19:05:07.055399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.099 [2024-07-24 19:05:07.067939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.099 [2024-07-24 19:05:07.067969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.099 [2024-07-24 19:05:07.080676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.099 [2024-07-24 19:05:07.080705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.099 [2024-07-24 19:05:07.093899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.099 [2024-07-24 19:05:07.093930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.099 [2024-07-24 19:05:07.106827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.099 [2024-07-24 19:05:07.106857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.120092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.120123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.133434] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.133463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.146336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.146373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.159302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.159333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.172166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.172195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.185108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.185139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.197768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.197797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.210828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.210858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.223398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.223427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.236477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.236513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.248837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.248868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.261579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.261609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.273926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.273955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.286489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.286519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.298995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.299025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.311849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.311878] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.324764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.324794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.337272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.337302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.350286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.350316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.358 [2024-07-24 19:05:07.363001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.358 [2024-07-24 19:05:07.363030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.616 [2024-07-24 19:05:07.376381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.616 [2024-07-24 19:05:07.376412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.616 [2024-07-24 19:05:07.389377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.616 [2024-07-24 19:05:07.389407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.616 [2024-07-24 19:05:07.401822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.401851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.414859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.414888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.428078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.428109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.440676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.440708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.453398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.453427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.466060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.466092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.478919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.478956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.491367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.491396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.504139] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.504175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.516618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.516647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.529697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.529726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.542542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.542574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.555310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.555340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.568218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.568255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.580977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.581007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.594147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.594176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.606999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.607027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.617 [2024-07-24 19:05:07.619342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.617 [2024-07-24 19:05:07.619371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.632181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.632219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.645159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.645189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.657682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.657712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.670288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.670322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.682664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.682694] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.695489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.695519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.708199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.708229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.720607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.720637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.733267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.733308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.745851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.745881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.758536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.758573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.771197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.771227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.783764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.783793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.796271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.796301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.808759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.808788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.821413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.821442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.875 [2024-07-24 19:05:07.833631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.875 [2024-07-24 19:05:07.833677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.876 [2024-07-24 19:05:07.846370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.876 [2024-07-24 19:05:07.846401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.876 [2024-07-24 19:05:07.858701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.876 [2024-07-24 19:05:07.858730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.876 [2024-07-24 19:05:07.871709] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.876 [2024-07-24 19:05:07.871738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.876 [2024-07-24 19:05:07.884183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.876 [2024-07-24 19:05:07.884212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:07.897254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:07.897292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:07.910297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:07.910327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:07.923107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:07.923140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:07.935915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:07.935947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:07.948261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:07.948298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:07.961008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:07.961038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:07.973627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:07.973667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:07.986449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:07.986478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:07.999015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:07.999044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:08.011472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:08.011509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:08.024247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:08.024276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:08.036871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:08.036901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:08.049646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:08.049675] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:08.062097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:08.062126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:08.074533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:08.074562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:08.086981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:08.087010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:08.099518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:08.099548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:08.111888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:08.111919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:08.124278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:08.124307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.135 [2024-07-24 19:05:08.136466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.135 [2024-07-24 19:05:08.136505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.393 [2024-07-24 19:05:08.149406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.393 [2024-07-24 19:05:08.149436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.393 [2024-07-24 19:05:08.162657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.393 [2024-07-24 19:05:08.162687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.393 [2024-07-24 19:05:08.175411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.393 [2024-07-24 19:05:08.175440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.393 [2024-07-24 19:05:08.187555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.393 [2024-07-24 19:05:08.187584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.393 [2024-07-24 19:05:08.200402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.393 [2024-07-24 19:05:08.200439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.393 [2024-07-24 19:05:08.213083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.213124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.225557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.225586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.238527] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.238556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.251714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.251743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.264917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.264947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.277759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.277788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.290830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.290860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.303589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.303619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.316105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.316136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.329016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.329046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.342035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.342064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.354688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.354719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.367361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.367390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.380433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.380462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.392998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.393027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.394 [2024-07-24 19:05:08.405807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.394 [2024-07-24 19:05:08.405836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.418900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.418928] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.431886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.431915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.444804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.444836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.457815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.457860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.470605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.470634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.483410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.483439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.496064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.496093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.508562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.508594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.521250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.521279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.534150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.534183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.546703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.546733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.559134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.559163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.571967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.571996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.585092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.585121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.597865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.597895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.610637] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.610666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.624045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.624074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.636942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.636971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.649875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.649904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.653 [2024-07-24 19:05:08.662787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.653 [2024-07-24 19:05:08.662816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.675406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.675437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.687989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.688019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.700708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.700752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.713514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.713544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.726223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.726252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.739149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.739184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.752049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.752079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.764762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.764793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.777348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.777382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.789990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.790020] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.802621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.802651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.815287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.815321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.827940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.827970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.840467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.840505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.853891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.853922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.866710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.866741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.879671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.879700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.892709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.892738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.905197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.905233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:02.912 [2024-07-24 19:05:08.917732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:02.912 [2024-07-24 19:05:08.917761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.192 [2024-07-24 19:05:08.930390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.192 [2024-07-24 19:05:08.930426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.192 [2024-07-24 19:05:08.942822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.192 [2024-07-24 19:05:08.942851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.192 [2024-07-24 19:05:08.955294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.192 [2024-07-24 19:05:08.955323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.192 [2024-07-24 19:05:08.967847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.192 [2024-07-24 19:05:08.967876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.192 [2024-07-24 19:05:08.980988] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.192 [2024-07-24 19:05:08.981017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.192 [2024-07-24 19:05:08.993991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.192 [2024-07-24 19:05:08.994020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.192 [2024-07-24 19:05:09.006634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.192 [2024-07-24 19:05:09.006663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.192 [2024-07-24 19:05:09.019606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.192 [2024-07-24 19:05:09.019635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.192 [2024-07-24 19:05:09.032397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.193 [2024-07-24 19:05:09.032426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.193 [2024-07-24 19:05:09.044988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.193 [2024-07-24 19:05:09.045017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.193 [2024-07-24 19:05:09.057432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.193 [2024-07-24 19:05:09.057461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.193 [2024-07-24 19:05:09.070289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.193 [2024-07-24 19:05:09.070319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.193 [2024-07-24 19:05:09.083014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.193 [2024-07-24 19:05:09.083047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.193 [2024-07-24 19:05:09.095275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.193 [2024-07-24 19:05:09.095304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.193 [2024-07-24 19:05:09.108120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.193 [2024-07-24 19:05:09.108150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.193 [2024-07-24 19:05:09.120872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.193 [2024-07-24 19:05:09.120902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.193 [2024-07-24 19:05:09.133466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.193 [2024-07-24 19:05:09.133512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.193 [2024-07-24 19:05:09.145914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.193 [2024-07-24 19:05:09.145943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.193 [2024-07-24 19:05:09.158519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.193 [2024-07-24 19:05:09.158548] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.193 [2024-07-24 19:05:09.171109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.193 [2024-07-24 19:05:09.171140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.193 [2024-07-24 19:05:09.183609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.193 [2024-07-24 19:05:09.183639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.196263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.196292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.208390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.208419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.220997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.221026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.233787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.233816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.245961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.245991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.258507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.258536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.271505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.271534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.283798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.283827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.296936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.296965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.309661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.309690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.322814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.322844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.335456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.335495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.347851] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.347880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.360785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.360814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.373437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.373467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.386143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.386175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.401653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.401691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.414279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.414309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.520 [2024-07-24 19:05:09.427631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.520 [2024-07-24 19:05:09.427661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.521 [2024-07-24 19:05:09.440195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.521 [2024-07-24 19:05:09.440225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.521 [2024-07-24 19:05:09.453224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.521 [2024-07-24 19:05:09.453255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.521 [2024-07-24 19:05:09.466288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.521 [2024-07-24 19:05:09.466328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.521 [2024-07-24 19:05:09.479404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.521 [2024-07-24 19:05:09.479434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.521 [2024-07-24 19:05:09.492603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.521 [2024-07-24 19:05:09.492632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.521 [2024-07-24 19:05:09.505218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.521 [2024-07-24 19:05:09.505247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.781 [2024-07-24 19:05:09.518685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.781 [2024-07-24 19:05:09.518715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.781 [2024-07-24 19:05:09.531635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.781 [2024-07-24 19:05:09.531665] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.781 [2024-07-24 19:05:09.544517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.781 [2024-07-24 19:05:09.544550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.781 [2024-07-24 19:05:09.556625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.781 [2024-07-24 19:05:09.556654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.781 00:08:03.781 Latency(us) 00:08:03.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.781 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:03.781 Nvme1n1 : 5.01 9954.47 77.77 0.00 0.00 12839.70 5995.33 24175.50 00:08:03.781 =================================================================================================================== 00:08:03.781 Total : 9954.47 77.77 0.00 0.00 12839.70 5995.33 24175.50 00:08:03.781 [2024-07-24 19:05:09.563236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.781 [2024-07-24 19:05:09.563263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.781 [2024-07-24 19:05:09.571255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.781 [2024-07-24 19:05:09.571282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.781 [2024-07-24 19:05:09.579282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.781 [2024-07-24 19:05:09.579311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.781 [2024-07-24 19:05:09.587388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.781 [2024-07-24 19:05:09.587451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.781 [2024-07-24 19:05:09.595409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.595501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.603432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.603505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.611444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.611516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.619467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.619549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.627503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.627562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.635517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.635575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.643539] 
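As a quick consistency check on the summary above: 9954.47 IOPS at the job's 8192-byte IO size works out to 9954.47 x 8192 / 2^20 ≈ 77.77 MiB/s, matching the MiB/s column; and Little's law with the job's queue depth of 128 gives 128 / 12839.70 us ≈ 9969 IOPS, within 0.2% of the measured rate.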
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.643597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.651533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.651582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.659501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.659534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.667551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.667584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.675555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.675583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.683572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.683599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.691698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.691760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.699706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.699763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.707634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.707658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.715676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.715708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.723693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.723721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.731708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.731734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.739802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.739864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.747834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.747909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.755766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.755788] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.763789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.763813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 [2024-07-24 19:05:09.771813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:03.782 [2024-07-24 19:05:09.771837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2502532) - No such process 00:08:03.782 19:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2502532 00:08:03.782 19:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.782 19:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.782 19:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.782 19:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.782 19:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:03.782 19:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.782 19:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.782 delay0 00:08:03.782 19:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.782 19:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:03.782 19:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.782 19:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.040 19:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.040 19:05:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:04.040 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.040 [2024-07-24 19:05:09.894461] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:12.154 Initializing NVMe Controllers 00:08:12.154 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:12.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:12.154 Initialization complete. Launching workers. 
00:08:12.154 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 243, failed: 20383 00:08:12.154 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20508, failed to submit 118 00:08:12.154 success 20431, unsuccess 77, failed 0 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:12.154 rmmod nvme_tcp 00:08:12.154 rmmod nvme_fabrics 00:08:12.154 rmmod nvme_keyring 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2501507 ']' 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2501507 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2501507 ']' 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2501507 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2501507 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2501507' 00:08:12.154 killing process with pid 2501507 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2501507 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2501507 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.154 19:05:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.535 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:13.535 00:08:13.536 real 0m28.549s 00:08:13.536 user 0m41.880s 00:08:13.536 sys 0m8.777s 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.536 ************************************ 00:08:13.536 END TEST nvmf_zcopy 00:08:13.536 ************************************ 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.536 ************************************ 00:08:13.536 START TEST nvmf_nmic 00:08:13.536 ************************************ 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:13.536 * Looking for test storage... 00:08:13.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.536 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.795 19:05:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.173 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:15.432 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:15.432 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:15.432 Found net devices under 0000:08:00.0: cvl_0_0 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.432 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:15.433 Found net devices under 0000:08:00.1: cvl_0_1 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:15.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:15.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:08:15.433 00:08:15.433 --- 10.0.0.2 ping statistics --- 00:08:15.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.433 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:08:15.433 00:08:15.433 --- 10.0.0.1 ping statistics --- 00:08:15.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.433 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2505237 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2505237 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2505237 ']' 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.433 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:15.433 [2024-07-24 19:05:21.395463] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:08:15.433 [2024-07-24 19:05:21.395579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.433 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.691 [2024-07-24 19:05:21.462749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.691 [2024-07-24 19:05:21.585748] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.691 [2024-07-24 19:05:21.585817] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.691 [2024-07-24 19:05:21.585834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.691 [2024-07-24 19:05:21.585846] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.691 [2024-07-24 19:05:21.585858] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.691 [2024-07-24 19:05:21.585941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.691 [2024-07-24 19:05:21.585994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.691 [2024-07-24 19:05:21.586283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.691 [2024-07-24 19:05:21.586287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.691 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.691 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:08:15.691 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:15.691 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:15.691 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:15.950 [2024-07-24 19:05:21.733814] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:15.950 Malloc0 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:15.950 [2024-07-24 19:05:21.784248] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:15.950 test case1: single bdev can't be used in multiple subsystems 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:15.950 [2024-07-24 19:05:21.808095] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:15.950 [2024-07-24 19:05:21.808126] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:15.950 [2024-07-24 19:05:21.808143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.950 request: 00:08:15.950 { 00:08:15.950 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:15.950 "namespace": { 
00:08:15.950 "bdev_name": "Malloc0", 00:08:15.950 "no_auto_visible": false 00:08:15.950 }, 00:08:15.950 "method": "nvmf_subsystem_add_ns", 00:08:15.950 "req_id": 1 00:08:15.950 } 00:08:15.950 Got JSON-RPC error response 00:08:15.950 response: 00:08:15.950 { 00:08:15.950 "code": -32602, 00:08:15.950 "message": "Invalid parameters" 00:08:15.950 } 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:15.950 Adding namespace failed - expected result. 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:15.950 test case2: host connect to nvmf target in multiple paths 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:15.950 [2024-07-24 19:05:21.816200] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.950 19:05:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:16.515 19:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:16.774 19:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:16.774 19:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:08:16.774 19:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:16.774 19:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:16.774 19:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:08:19.299 19:05:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:19.299 19:05:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:19.300 19:05:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:19.300 19:05:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:19.300 19:05:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:19.300 19:05:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 
00:08:19.300 19:05:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:08:19.300 [global]
00:08:19.300 thread=1
00:08:19.300 invalidate=1
00:08:19.300 rw=write
00:08:19.300 time_based=1
00:08:19.300 runtime=1
00:08:19.300 ioengine=libaio
00:08:19.300 direct=1
00:08:19.300 bs=4096
00:08:19.300 iodepth=1
00:08:19.300 norandommap=0
00:08:19.300 numjobs=1
00:08:19.300 
00:08:19.300 verify_dump=1
00:08:19.300 verify_backlog=512
00:08:19.300 verify_state_save=0
00:08:19.300 do_verify=1
00:08:19.300 verify=crc32c-intel
00:08:19.300 [job0]
00:08:19.300 filename=/dev/nvme0n1
00:08:19.300 Could not set queue depth (nvme0n1)
00:08:19.300 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:19.300 fio-3.35
00:08:19.300 Starting 1 thread
00:08:20.233 
00:08:20.233 job0: (groupid=0, jobs=1): err= 0: pid=2505638: Wed Jul 24 19:05:26 2024
00:08:20.233 read: IOPS=1972, BW=7888KiB/s (8077kB/s)(7896KiB/1001msec)
00:08:20.233 slat (nsec): min=5504, max=36983, avg=11334.63, stdev=3569.52
00:08:20.233 clat (usec): min=203, max=41938, avg=274.34, stdev=1320.14
00:08:20.233 lat (usec): min=210, max=41969, avg=285.67, stdev=1320.54
00:08:20.233 clat percentiles (usec):
00:08:20.233 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 223],
00:08:20.233 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 231], 60.00th=[ 233],
00:08:20.233 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 258],
00:08:20.233 | 99.00th=[ 289], 99.50th=[ 318], 99.90th=[41681], 99.95th=[41681],
00:08:20.233 | 99.99th=[41681]
00:08:20.233 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:08:20.233 slat (usec): min=7, max=34634, avg=33.00, stdev=764.98
00:08:20.233 clat (usec): min=144, max=317, avg=173.00, stdev=16.79
00:08:20.233 lat (usec): min=153, max=34927, avg=206.01, stdev=767.80
00:08:20.233 clat percentiles (usec):
00:08:20.233 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161],
00:08:20.233 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172],
00:08:20.233 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202],
00:08:20.233 | 99.00th=[ 221], 99.50th=[ 227], 99.90th=[ 302], 99.95th=[ 310],
00:08:20.233 | 99.99th=[ 318]
00:08:20.233 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1
00:08:20.233 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:08:20.233 lat (usec) : 250=95.72%, 500=4.23%
00:08:20.233 lat (msec) : 50=0.05%
00:08:20.233 cpu : usr=3.00%, sys=5.70%, ctx=4027, majf=0, minf=1
00:08:20.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:20.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:20.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:20.233 issued rwts: total=1974,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:20.233 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:20.233 
00:08:20.233 Run status group 0 (all jobs):
00:08:20.233 READ: bw=7888KiB/s (8077kB/s), 7888KiB/s-7888KiB/s (8077kB/s-8077kB/s), io=7896KiB (8086kB), run=1001-1001msec
00:08:20.233 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec
00:08:20.233 
00:08:20.233 Disk stats (read/write):
00:08:20.233 nvme0n1: ios=1598/2048, merge=0/0, ticks=1432/335, in_queue=1767, util=98.90%
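Note: the fio-wrapper flags above (-p nvmf -i 4096 -d 1 -t write -r 1 -v) generate the [global]/[job0] job file echoed in the log. A roughly equivalent standalone invocation, assuming the namespace again appears as /dev/nvme0n1 (the device name can vary per host), would be:

    # one-shot verify-write job against the connected namespace (sketch only)
    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
        --verify=crc32c-intel --do_verify=1 --verify_dump=1 --verify_backlog=512

The 99.90th-percentile read latency landing in the 41681 usec bucket against a ~230 usec median is what drives stdev=1320.14 in the clat line: a handful of outlier completions, not a shift in the typical latency.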
00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:20.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:20.233 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:20.490 rmmod nvme_tcp 00:08:20.490 rmmod nvme_fabrics 00:08:20.490 rmmod nvme_keyring 00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2505237 ']' 00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2505237 00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2505237 ']' 00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2505237 00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2505237 00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2505237' 00:08:20.490 killing process with pid 2505237 00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2505237 
00:08:20.490 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2505237 00:08:20.750 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:20.750 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:20.750 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:20.750 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.750 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:20.750 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.750 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.750 19:05:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.660 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:22.660 00:08:22.660 real 0m9.137s 00:08:22.660 user 0m19.910s 00:08:22.660 sys 0m2.296s 00:08:22.660 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.660 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:22.660 ************************************ 00:08:22.660 END TEST nvmf_nmic 00:08:22.660 ************************************ 00:08:22.660 19:05:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:22.660 19:05:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:22.660 19:05:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.660 19:05:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.660 ************************************ 00:08:22.660 START TEST nvmf_fio_target 00:08:22.660 ************************************ 00:08:22.660 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:22.918 * Looking for test storage... 
00:08:22.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.918 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.918 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:22.918 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.919 19:05:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.919 19:05:28 
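The entries above show nvmf/common.sh composing the target's command line by appending to a bash array rather than concatenating a string, so each argument survives later word-splitting intact. A minimal sketch of that pattern follows; the binary path and SHM id are illustrative placeholders, not this run's values.

#!/usr/bin/env bash
# Sketch of the NVMF_APP assembly pattern seen at nvmf/common.sh@29 and @31 above.
NVMF_APP=(build/bin/nvmf_tgt)        # placeholder path; the run uses the full workspace path
NVMF_APP_SHM_ID=0                    # shared-memory id passed via -i
NO_HUGE=()                           # stays empty when hugepages are available, as in this run
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # matches common.sh@29
NVMF_APP+=("${NO_HUGE[@]}")                   # matches common.sh@31
echo "target command line: ${NVMF_APP[*]}"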
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:08:22.919 19:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:24.823 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:24.823 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:24.823 
19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:24.823 Found net devices under 0000:08:00.0: cvl_0_0 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.823 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:24.824 Found net devices under 0000:08:00.1: cvl_0_1 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:24.824 19:05:30 
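Before any namespace plumbing, gather_supported_nvmf_pci_devs maps each matching PCI function to its kernel interface by globbing sysfs and stripping the path, which is what produces the "Found net devices under 0000:08:00.x: cvl_0_x" lines above. A standalone sketch of that lookup, with the PCI address hard-coded to this run's first E810 port:

#!/usr/bin/env bash
# Resolve a PCI function to its net device name(s) via sysfs, as common.sh@383/@399 do above.
pci=0000:08:00.0                                   # first 0x8086:0x159b port found in this run
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. /sys/bus/pci/devices/.../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the directory prefix, keep the names
printf 'Found net devices under %s: %s\n' "$pci" "${pci_net_devs[*]}"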
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:24.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:08:24.824 00:08:24.824 --- 10.0.0.2 ping statistics --- 00:08:24.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.824 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:24.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:08:24.824 00:08:24.824 --- 10.0.0.1 ping statistics --- 00:08:24.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.824 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2507244 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2507244 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2507244 ']' 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.824 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:24.824 [2024-07-24 19:05:30.627625] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
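The nvmf_tcp_init steps traced above (common.sh@244 through @268) isolate one NIC port in a network namespace for the target and leave its sibling port in the root namespace for the initiator, then verify both directions with ping. A condensed replay of that sequence, using this run's interface names and addresses (requires root and a dual-port NIC):

#!/usr/bin/env bash
# Condensed replay of nvmf_tcp_init as traced above; interface names are from this run.
set -e
TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"       # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                # root namespace -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1            # namespace -> initiator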
00:08:24.824 [2024-07-24 19:05:30.627719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.824 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.824 [2024-07-24 19:05:30.693460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.824 [2024-07-24 19:05:30.810715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.824 [2024-07-24 19:05:30.810778] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.824 [2024-07-24 19:05:30.810794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.824 [2024-07-24 19:05:30.810807] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.824 [2024-07-24 19:05:30.810819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.824 [2024-07-24 19:05:30.810953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.824 [2024-07-24 19:05:30.811038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.824 [2024-07-24 19:05:30.811089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.824 [2024-07-24 19:05:30.811091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.082 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.082 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:08:25.082 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:25.082 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:25.082 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:25.082 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.082 19:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:25.340 [2024-07-24 19:05:31.233549] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.340 19:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:25.599 19:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:25.599 19:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:26.165 19:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:26.165 19:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:26.424 19:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:26.424 19:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:26.682 19:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:26.682 19:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:26.940 19:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:27.198 19:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:27.198 19:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:27.455 19:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:27.455 19:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:27.713 19:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:27.713 19:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:27.970 19:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:28.227 19:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:28.227 19:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.484 19:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:28.484 19:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:28.742 19:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.999 [2024-07-24 19:05:34.839128] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.999 19:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:29.256 19:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:29.513 19:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:30.079 19:05:35 
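With networking up and nvmf_tgt listening on its RPC socket, target/fio.sh provisions everything over rpc.py: a TCP transport, seven 64 MB malloc bdevs with 512 B blocks, a two-disk raid0 and a three-disk concat volume, and one subsystem exposing four namespaces, after which the initiator connects. A condensed replay of those calls, with the rpc.py path shortened and the host NQN/ID derivation sketched from the gen-hostnqn step earlier in the trace:

#!/usr/bin/env bash
# Condensed replay of the provisioning RPCs traced above (fio.sh@19 through @46).
set -e
rpc=./scripts/rpc.py                     # shortened; the run uses the full workspace path
HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
HOSTID=${HOSTNQN##*:}                    # one way to recover the uuid part, as common.sh does
$rpc nvmf_create_transport -t tcp -o -u 8192
for _ in 1 2 3 4 5 6 7; do $rpc bdev_malloc_create 64 512; done      # -> Malloc0 .. Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420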
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:30.079 19:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:08:30.079 19:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:30.079 19:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:08:30.079 19:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:08:30.079 19:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:08:31.976 19:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:31.976 19:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:31.976 19:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:31.976 19:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:08:31.976 19:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:31.976 19:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:08:31.976 19:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:31.976 [global] 00:08:31.976 thread=1 00:08:31.976 invalidate=1 00:08:31.976 rw=write 00:08:31.976 time_based=1 00:08:31.976 runtime=1 00:08:31.976 ioengine=libaio 00:08:31.976 direct=1 00:08:31.976 bs=4096 00:08:31.976 iodepth=1 00:08:31.976 norandommap=0 00:08:31.976 numjobs=1 00:08:31.976 00:08:31.976 verify_dump=1 00:08:31.976 verify_backlog=512 00:08:31.976 verify_state_save=0 00:08:31.976 do_verify=1 00:08:31.976 verify=crc32c-intel 00:08:31.976 [job0] 00:08:31.976 filename=/dev/nvme0n1 00:08:31.976 [job1] 00:08:31.976 filename=/dev/nvme0n2 00:08:31.976 [job2] 00:08:31.976 filename=/dev/nvme0n3 00:08:31.976 [job3] 00:08:31.976 filename=/dev/nvme0n4 00:08:31.976 Could not set queue depth (nvme0n1) 00:08:31.976 Could not set queue depth (nvme0n2) 00:08:31.976 Could not set queue depth (nvme0n3) 00:08:31.976 Could not set queue depth (nvme0n4) 00:08:32.234 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:32.234 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:32.234 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:32.234 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:32.234 fio-3.35 00:08:32.234 Starting 4 threads 00:08:33.607 00:08:33.607 job0: (groupid=0, jobs=1): err= 0: pid=2508077: Wed Jul 24 19:05:39 2024 00:08:33.607 read: IOPS=30, BW=123KiB/s (126kB/s)(124KiB/1007msec) 00:08:33.607 slat (nsec): min=6777, max=37113, avg=23900.94, stdev=8684.49 00:08:33.607 clat (usec): min=377, max=42036, avg=28097.29, stdev=19412.98 00:08:33.607 lat (usec): min=394, max=42053, avg=28121.19, stdev=19416.84 00:08:33.607 clat percentiles (usec): 00:08:33.607 | 1.00th=[ 379], 5.00th=[ 404], 10.00th=[ 429], 20.00th=[ 441], 
00:08:33.607 | 30.00th=[ 449], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:33.607 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:08:33.607 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:33.607 | 99.99th=[42206] 00:08:33.607 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:08:33.607 slat (nsec): min=6551, max=44014, avg=8229.67, stdev=2626.56 00:08:33.607 clat (usec): min=161, max=502, avg=252.35, stdev=47.02 00:08:33.607 lat (usec): min=169, max=509, avg=260.58, stdev=47.71 00:08:33.607 clat percentiles (usec): 00:08:33.607 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 219], 00:08:33.607 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 243], 00:08:33.607 | 70.00th=[ 265], 80.00th=[ 302], 90.00th=[ 322], 95.00th=[ 338], 00:08:33.607 | 99.00th=[ 379], 99.50th=[ 424], 99.90th=[ 502], 99.95th=[ 502], 00:08:33.607 | 99.99th=[ 502] 00:08:33.607 bw ( KiB/s): min= 4096, max= 4096, per=34.57%, avg=4096.00, stdev= 0.00, samples=1 00:08:33.607 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:33.607 lat (usec) : 250=60.04%, 500=35.91%, 750=0.18% 00:08:33.607 lat (msec) : 50=3.87% 00:08:33.607 cpu : usr=0.20%, sys=0.50%, ctx=544, majf=0, minf=1 00:08:33.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:33.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.607 issued rwts: total=31,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:33.607 job1: (groupid=0, jobs=1): err= 0: pid=2508078: Wed Jul 24 19:05:39 2024 00:08:33.607 read: IOPS=555, BW=2221KiB/s (2274kB/s)(2252KiB/1014msec) 00:08:33.607 slat (nsec): min=6022, max=38038, avg=11491.43, stdev=5636.48 00:08:33.607 clat (usec): min=208, max=42282, avg=1378.59, stdev=6652.89 00:08:33.607 lat (usec): min=218, max=42299, avg=1390.08, stdev=6655.21 00:08:33.607 clat percentiles (usec): 00:08:33.607 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 235], 00:08:33.607 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:08:33.607 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 338], 95.00th=[ 412], 00:08:33.607 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:33.607 | 99.99th=[42206] 00:08:33.607 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:08:33.607 slat (nsec): min=6903, max=34714, avg=12356.77, stdev=4244.24 00:08:33.607 clat (usec): min=152, max=423, avg=208.19, stdev=38.95 00:08:33.607 lat (usec): min=167, max=441, avg=220.55, stdev=37.85 00:08:33.607 clat percentiles (usec): 00:08:33.607 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 180], 00:08:33.607 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 202], 00:08:33.607 | 70.00th=[ 208], 80.00th=[ 243], 90.00th=[ 265], 95.00th=[ 293], 00:08:33.607 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 355], 99.95th=[ 424], 00:08:33.607 | 99.99th=[ 424] 00:08:33.607 bw ( KiB/s): min= 1528, max= 6664, per=34.57%, avg=4096.00, stdev=3631.70, samples=2 00:08:33.607 iops : min= 382, max= 1666, avg=1024.00, stdev=907.93, samples=2 00:08:33.607 lat (usec) : 250=64.90%, 500=34.15% 00:08:33.607 lat (msec) : 50=0.95% 00:08:33.607 cpu : usr=0.69%, sys=2.17%, ctx=1590, majf=0, minf=1 00:08:33.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:08:33.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.607 issued rwts: total=563,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:33.607 job2: (groupid=0, jobs=1): err= 0: pid=2508079: Wed Jul 24 19:05:39 2024 00:08:33.607 read: IOPS=507, BW=2032KiB/s (2080kB/s)(2060KiB/1014msec) 00:08:33.607 slat (nsec): min=6266, max=58326, avg=15455.08, stdev=7357.62 00:08:33.607 clat (usec): min=219, max=42292, avg=1516.58, stdev=6902.58 00:08:33.607 lat (usec): min=225, max=42307, avg=1532.03, stdev=6904.20 00:08:33.607 clat percentiles (usec): 00:08:33.607 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 251], 00:08:33.607 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:08:33.607 | 70.00th=[ 302], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 400], 00:08:33.607 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:08:33.607 | 99.99th=[42206] 00:08:33.607 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:08:33.607 slat (nsec): min=6504, max=38089, avg=13388.11, stdev=5197.89 00:08:33.607 clat (usec): min=159, max=321, avg=199.96, stdev=27.84 00:08:33.607 lat (usec): min=175, max=331, avg=213.34, stdev=26.89 00:08:33.607 clat percentiles (usec): 00:08:33.607 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:08:33.607 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 196], 60.00th=[ 206], 00:08:33.607 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 251], 00:08:33.607 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 306], 99.95th=[ 322], 00:08:33.607 | 99.99th=[ 322] 00:08:33.607 bw ( KiB/s): min= 4096, max= 4096, per=34.57%, avg=4096.00, stdev= 0.00, samples=2 00:08:33.607 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:08:33.607 lat (usec) : 250=69.46%, 500=29.30%, 750=0.13% 00:08:33.607 lat (msec) : 10=0.06%, 20=0.06%, 50=0.97% 00:08:33.607 cpu : usr=1.78%, sys=2.17%, ctx=1539, majf=0, minf=1 00:08:33.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:33.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.607 issued rwts: total=515,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:33.607 job3: (groupid=0, jobs=1): err= 0: pid=2508080: Wed Jul 24 19:05:39 2024 00:08:33.607 read: IOPS=412, BW=1651KiB/s (1691kB/s)(1712KiB/1037msec) 00:08:33.607 slat (nsec): min=6159, max=42075, avg=12200.28, stdev=7547.99 00:08:33.607 clat (usec): min=220, max=41842, avg=2090.62, stdev=8398.68 00:08:33.607 lat (usec): min=227, max=41875, avg=2102.82, stdev=8401.66 00:08:33.607 clat percentiles (usec): 00:08:33.607 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:08:33.607 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 269], 00:08:33.607 | 70.00th=[ 293], 80.00th=[ 355], 90.00th=[ 392], 95.00th=[ 537], 00:08:33.607 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:08:33.608 | 99.99th=[41681] 00:08:33.608 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:08:33.608 slat (nsec): min=7024, max=22835, avg=8770.65, stdev=1951.14 00:08:33.608 clat (usec): min=189, max=436, avg=251.42, stdev=43.42 00:08:33.608 lat (usec): min=196, 
max=458, avg=260.20, stdev=44.07 00:08:33.608 clat percentiles (usec): 00:08:33.608 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 219], 00:08:33.608 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 245], 00:08:33.608 | 70.00th=[ 273], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 330], 00:08:33.608 | 99.00th=[ 355], 99.50th=[ 371], 99.90th=[ 437], 99.95th=[ 437], 00:08:33.608 | 99.99th=[ 437] 00:08:33.608 bw ( KiB/s): min= 4096, max= 4096, per=34.57%, avg=4096.00, stdev= 0.00, samples=1 00:08:33.608 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:33.608 lat (usec) : 250=54.04%, 500=43.40%, 750=0.53% 00:08:33.608 lat (msec) : 50=2.02% 00:08:33.608 cpu : usr=0.48%, sys=1.25%, ctx=941, majf=0, minf=1 00:08:33.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:33.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.608 issued rwts: total=428,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:33.608 00:08:33.608 Run status group 0 (all jobs): 00:08:33.608 READ: bw=5929KiB/s (6071kB/s), 123KiB/s-2221KiB/s (126kB/s-2274kB/s), io=6148KiB (6296kB), run=1007-1037msec 00:08:33.608 WRITE: bw=11.6MiB/s (12.1MB/s), 1975KiB/s-4039KiB/s (2022kB/s-4136kB/s), io=12.0MiB (12.6MB), run=1007-1037msec 00:08:33.608 00:08:33.608 Disk stats (read/write): 00:08:33.608 nvme0n1: ios=77/512, merge=0/0, ticks=741/124, in_queue=865, util=87.17% 00:08:33.608 nvme0n2: ios=605/1024, merge=0/0, ticks=1317/217, in_queue=1534, util=98.47% 00:08:33.608 nvme0n3: ios=529/833, merge=0/0, ticks=1121/171, in_queue=1292, util=91.44% 00:08:33.608 nvme0n4: ios=413/512, merge=0/0, ticks=681/127, in_queue=808, util=89.68% 00:08:33.608 19:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:33.608 [global] 00:08:33.608 thread=1 00:08:33.608 invalidate=1 00:08:33.608 rw=randwrite 00:08:33.608 time_based=1 00:08:33.608 runtime=1 00:08:33.608 ioengine=libaio 00:08:33.608 direct=1 00:08:33.608 bs=4096 00:08:33.608 iodepth=1 00:08:33.608 norandommap=0 00:08:33.608 numjobs=1 00:08:33.608 00:08:33.608 verify_dump=1 00:08:33.608 verify_backlog=512 00:08:33.608 verify_state_save=0 00:08:33.608 do_verify=1 00:08:33.608 verify=crc32c-intel 00:08:33.608 [job0] 00:08:33.608 filename=/dev/nvme0n1 00:08:33.608 [job1] 00:08:33.608 filename=/dev/nvme0n2 00:08:33.608 [job2] 00:08:33.608 filename=/dev/nvme0n3 00:08:33.608 [job3] 00:08:33.608 filename=/dev/nvme0n4 00:08:33.608 Could not set queue depth (nvme0n1) 00:08:33.608 Could not set queue depth (nvme0n2) 00:08:33.608 Could not set queue depth (nvme0n3) 00:08:33.608 Could not set queue depth (nvme0n4) 00:08:33.608 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:33.608 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:33.608 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:33.608 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:33.608 fio-3.35 00:08:33.608 Starting 4 threads 00:08:34.988 00:08:34.988 job0: (groupid=0, jobs=1): err= 0: pid=2508268: Wed Jul 24 
19:05:40 2024 00:08:34.988 read: IOPS=1366, BW=5467KiB/s (5598kB/s)(5472KiB/1001msec) 00:08:34.988 slat (nsec): min=6294, max=58540, avg=14140.50, stdev=5539.04 00:08:34.988 clat (usec): min=208, max=41945, avg=427.03, stdev=2481.19 00:08:34.988 lat (usec): min=215, max=41979, avg=441.17, stdev=2481.70 00:08:34.988 clat percentiles (usec): 00:08:34.988 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:08:34.988 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:08:34.988 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 433], 95.00th=[ 457], 00:08:34.988 | 99.00th=[ 519], 99.50th=[ 685], 99.90th=[41681], 99.95th=[42206], 00:08:34.988 | 99.99th=[42206] 00:08:34.988 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:08:34.988 slat (nsec): min=8431, max=71558, avg=20925.83, stdev=5328.49 00:08:34.988 clat (usec): min=172, max=530, avg=227.87, stdev=43.98 00:08:34.988 lat (usec): min=192, max=540, avg=248.80, stdev=44.70 00:08:34.988 clat percentiles (usec): 00:08:34.988 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:08:34.988 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 219], 60.00th=[ 229], 00:08:34.988 | 70.00th=[ 239], 80.00th=[ 265], 90.00th=[ 293], 95.00th=[ 310], 00:08:34.988 | 99.00th=[ 355], 99.50th=[ 371], 99.90th=[ 449], 99.95th=[ 529], 00:08:34.988 | 99.99th=[ 529] 00:08:34.988 bw ( KiB/s): min= 4096, max= 4096, per=34.07%, avg=4096.00, stdev= 0.00, samples=1 00:08:34.988 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:34.988 lat (usec) : 250=55.61%, 500=43.77%, 750=0.45% 00:08:34.988 lat (msec) : 50=0.17% 00:08:34.988 cpu : usr=3.50%, sys=7.40%, ctx=2905, majf=0, minf=1 00:08:34.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:34.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.988 issued rwts: total=1368,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:34.989 job1: (groupid=0, jobs=1): err= 0: pid=2508269: Wed Jul 24 19:05:40 2024 00:08:34.989 read: IOPS=29, BW=119KiB/s (121kB/s)(120KiB/1012msec) 00:08:34.989 slat (nsec): min=15077, max=42988, avg=28107.77, stdev=7435.70 00:08:34.989 clat (usec): min=395, max=41117, avg=28797.89, stdev=18873.63 00:08:34.989 lat (usec): min=431, max=41132, avg=28826.00, stdev=18872.45 00:08:34.989 clat percentiles (usec): 00:08:34.989 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 420], 20.00th=[ 474], 00:08:34.989 | 30.00th=[ 523], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:08:34.989 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:34.989 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:34.989 | 99.99th=[41157] 00:08:34.989 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:08:34.989 slat (nsec): min=7473, max=43770, avg=17693.63, stdev=5359.14 00:08:34.989 clat (usec): min=179, max=529, avg=264.61, stdev=55.35 00:08:34.989 lat (usec): min=196, max=558, avg=282.30, stdev=55.63 00:08:34.989 clat percentiles (usec): 00:08:34.989 | 1.00th=[ 188], 5.00th=[ 204], 10.00th=[ 217], 20.00th=[ 231], 00:08:34.989 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:08:34.989 | 70.00th=[ 265], 80.00th=[ 289], 90.00th=[ 355], 95.00th=[ 375], 00:08:34.989 | 99.00th=[ 465], 99.50th=[ 515], 99.90th=[ 529], 99.95th=[ 529], 00:08:34.989 | 
99.99th=[ 529] 00:08:34.989 bw ( KiB/s): min= 4096, max= 4096, per=34.07%, avg=4096.00, stdev= 0.00, samples=1 00:08:34.989 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:34.989 lat (usec) : 250=50.37%, 500=45.02%, 750=0.74% 00:08:34.989 lat (msec) : 50=3.87% 00:08:34.989 cpu : usr=0.69%, sys=0.69%, ctx=543, majf=0, minf=1 00:08:34.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:34.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.989 issued rwts: total=30,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:34.989 job2: (groupid=0, jobs=1): err= 0: pid=2508277: Wed Jul 24 19:05:40 2024 00:08:34.989 read: IOPS=81, BW=327KiB/s (335kB/s)(328KiB/1002msec) 00:08:34.989 slat (nsec): min=10568, max=72439, avg=28463.15, stdev=8888.38 00:08:34.989 clat (usec): min=293, max=41955, avg=10279.07, stdev=17433.67 00:08:34.989 lat (usec): min=307, max=41986, avg=10307.53, stdev=17432.67 00:08:34.989 clat percentiles (usec): 00:08:34.989 | 1.00th=[ 293], 5.00th=[ 351], 10.00th=[ 396], 20.00th=[ 416], 00:08:34.989 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 445], 60.00th=[ 461], 00:08:34.989 | 70.00th=[ 537], 80.00th=[40633], 90.00th=[40633], 95.00th=[41157], 00:08:34.989 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:34.989 | 99.99th=[42206] 00:08:34.989 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:08:34.989 slat (nsec): min=8269, max=51275, avg=22819.29, stdev=7467.91 00:08:34.989 clat (usec): min=211, max=811, avg=274.99, stdev=58.62 00:08:34.989 lat (usec): min=230, max=828, avg=297.81, stdev=60.43 00:08:34.989 clat percentiles (usec): 00:08:34.989 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 241], 00:08:34.989 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:08:34.989 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 351], 00:08:34.989 | 99.00th=[ 529], 99.50th=[ 701], 99.90th=[ 816], 99.95th=[ 816], 00:08:34.989 | 99.99th=[ 816] 00:08:34.989 bw ( KiB/s): min= 4096, max= 4096, per=34.07%, avg=4096.00, stdev= 0.00, samples=1 00:08:34.989 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:34.989 lat (usec) : 250=27.44%, 500=66.84%, 750=2.02%, 1000=0.34% 00:08:34.989 lat (msec) : 50=3.37% 00:08:34.989 cpu : usr=0.80%, sys=2.00%, ctx=595, majf=0, minf=1 00:08:34.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:34.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.989 issued rwts: total=82,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:34.989 job3: (groupid=0, jobs=1): err= 0: pid=2508283: Wed Jul 24 19:05:40 2024 00:08:34.989 read: IOPS=22, BW=90.0KiB/s (92.2kB/s)(92.0KiB/1022msec) 00:08:34.989 slat (nsec): min=16001, max=48216, avg=30390.74, stdev=9746.77 00:08:34.989 clat (usec): min=419, max=41033, avg=37039.76, stdev=11682.41 00:08:34.989 lat (usec): min=452, max=41050, avg=37070.15, stdev=11681.22 00:08:34.989 clat percentiles (usec): 00:08:34.989 | 1.00th=[ 420], 5.00th=[ 570], 10.00th=[31851], 20.00th=[40633], 00:08:34.989 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:34.989 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:34.989 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:34.989 | 99.99th=[41157] 00:08:34.989 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:08:34.989 slat (nsec): min=8997, max=49882, avg=23433.54, stdev=6094.31 00:08:34.989 clat (usec): min=192, max=832, avg=299.41, stdev=76.11 00:08:34.989 lat (usec): min=211, max=843, avg=322.84, stdev=77.71 00:08:34.989 clat percentiles (usec): 00:08:34.989 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 221], 20.00th=[ 235], 00:08:34.989 | 30.00th=[ 247], 40.00th=[ 262], 50.00th=[ 277], 60.00th=[ 306], 00:08:34.989 | 70.00th=[ 343], 80.00th=[ 371], 90.00th=[ 404], 95.00th=[ 441], 00:08:34.989 | 99.00th=[ 482], 99.50th=[ 506], 99.90th=[ 832], 99.95th=[ 832], 00:08:34.989 | 99.99th=[ 832] 00:08:34.989 bw ( KiB/s): min= 4096, max= 4096, per=34.07%, avg=4096.00, stdev= 0.00, samples=1 00:08:34.989 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:34.989 lat (usec) : 250=31.40%, 500=63.93%, 750=0.56%, 1000=0.19% 00:08:34.989 lat (msec) : 50=3.93% 00:08:34.989 cpu : usr=0.88%, sys=1.57%, ctx=536, majf=0, minf=1 00:08:34.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:34.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.989 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:34.989 00:08:34.989 Run status group 0 (all jobs): 00:08:34.989 READ: bw=5883KiB/s (6024kB/s), 90.0KiB/s-5467KiB/s (92.2kB/s-5598kB/s), io=6012KiB (6156kB), run=1001-1022msec 00:08:34.989 WRITE: bw=11.7MiB/s (12.3MB/s), 2004KiB/s-6138KiB/s (2052kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1022msec 00:08:34.989 00:08:34.989 Disk stats (read/write): 00:08:34.989 nvme0n1: ios=1062/1257, merge=0/0, ticks=692/274, in_queue=966, util=99.70% 00:08:34.989 nvme0n2: ios=72/512, merge=0/0, ticks=1351/137, in_queue=1488, util=96.75% 00:08:34.989 nvme0n3: ios=123/512, merge=0/0, ticks=1580/130, in_queue=1710, util=100.00% 00:08:34.989 nvme0n4: ios=65/512, merge=0/0, ticks=1004/146, in_queue=1150, util=98.43% 00:08:34.989 19:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:34.989 [global] 00:08:34.989 thread=1 00:08:34.989 invalidate=1 00:08:34.989 rw=write 00:08:34.989 time_based=1 00:08:34.989 runtime=1 00:08:34.989 ioengine=libaio 00:08:34.989 direct=1 00:08:34.989 bs=4096 00:08:34.989 iodepth=128 00:08:34.989 norandommap=0 00:08:34.989 numjobs=1 00:08:34.989 00:08:34.989 verify_dump=1 00:08:34.989 verify_backlog=512 00:08:34.989 verify_state_save=0 00:08:34.989 do_verify=1 00:08:34.989 verify=crc32c-intel 00:08:34.989 [job0] 00:08:34.989 filename=/dev/nvme0n1 00:08:34.989 [job1] 00:08:34.989 filename=/dev/nvme0n2 00:08:34.989 [job2] 00:08:34.989 filename=/dev/nvme0n3 00:08:34.989 [job3] 00:08:34.989 filename=/dev/nvme0n4 00:08:34.989 Could not set queue depth (nvme0n1) 00:08:34.989 Could not set queue depth (nvme0n2) 00:08:34.989 Could not set queue depth (nvme0n3) 00:08:34.989 Could not set queue depth (nvme0n4) 00:08:35.268 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:35.268 job1: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:35.268 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:35.268 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:35.268 fio-3.35 00:08:35.268 Starting 4 threads 00:08:36.656 00:08:36.656 job0: (groupid=0, jobs=1): err= 0: pid=2508534: Wed Jul 24 19:05:42 2024 00:08:36.656 read: IOPS=4113, BW=16.1MiB/s (16.8MB/s)(16.1MiB/1004msec) 00:08:36.656 slat (usec): min=2, max=23635, avg=116.81, stdev=797.11 00:08:36.656 clat (usec): min=3343, max=45120, avg=14727.22, stdev=5756.70 00:08:36.656 lat (usec): min=4032, max=45125, avg=14844.04, stdev=5799.66 00:08:36.656 clat percentiles (usec): 00:08:36.656 | 1.00th=[ 5473], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[11600], 00:08:36.656 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13435], 60.00th=[13829], 00:08:36.656 | 70.00th=[14091], 80.00th=[15533], 90.00th=[22938], 95.00th=[28443], 00:08:36.656 | 99.00th=[36963], 99.50th=[37487], 99.90th=[40633], 99.95th=[45351], 00:08:36.656 | 99.99th=[45351] 00:08:36.656 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:08:36.656 slat (usec): min=4, max=15640, avg=102.91, stdev=640.28 00:08:36.656 clat (usec): min=4051, max=43075, avg=14341.87, stdev=5622.12 00:08:36.656 lat (usec): min=4841, max=43085, avg=14444.78, stdev=5649.20 00:08:36.656 clat percentiles (usec): 00:08:36.656 | 1.00th=[ 6980], 5.00th=[ 9241], 10.00th=[10945], 20.00th=[11207], 00:08:36.656 | 30.00th=[11600], 40.00th=[12256], 50.00th=[13829], 60.00th=[14091], 00:08:36.656 | 70.00th=[14353], 80.00th=[15008], 90.00th=[18220], 95.00th=[28967], 00:08:36.656 | 99.00th=[40109], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:08:36.656 | 99.99th=[43254] 00:08:36.656 bw ( KiB/s): min=16384, max=19736, per=27.40%, avg=18060.00, stdev=2370.22, samples=2 00:08:36.656 iops : min= 4096, max= 4934, avg=4515.00, stdev=592.56, samples=2 00:08:36.656 lat (msec) : 4=0.01%, 10=7.35%, 20=81.63%, 50=11.01% 00:08:36.656 cpu : usr=5.68%, sys=8.08%, ctx=357, majf=0, minf=7 00:08:36.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:36.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:36.656 issued rwts: total=4130,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:36.656 job1: (groupid=0, jobs=1): err= 0: pid=2508535: Wed Jul 24 19:05:42 2024 00:08:36.656 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:08:36.656 slat (usec): min=2, max=16901, avg=123.96, stdev=769.19 00:08:36.656 clat (usec): min=7433, max=63923, avg=15897.76, stdev=7869.75 00:08:36.656 lat (usec): min=7439, max=63938, avg=16021.72, stdev=7936.25 00:08:36.656 clat percentiles (usec): 00:08:36.656 | 1.00th=[ 7570], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[11207], 00:08:36.656 | 30.00th=[11731], 40.00th=[12518], 50.00th=[12911], 60.00th=[13566], 00:08:36.656 | 70.00th=[15664], 80.00th=[20317], 90.00th=[23987], 95.00th=[31065], 00:08:36.656 | 99.00th=[52167], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:08:36.656 | 99.99th=[63701] 00:08:36.656 write: IOPS=4234, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1006msec); 0 zone resets 00:08:36.656 slat (usec): min=4, max=13524, avg=102.51, stdev=605.59 00:08:36.656 clat (usec): min=397, max=42301, 
avg=14629.58, stdev=6451.41 00:08:36.656 lat (usec): min=422, max=42311, avg=14732.09, stdev=6490.99 00:08:36.656 clat percentiles (usec): 00:08:36.656 | 1.00th=[ 2966], 5.00th=[ 6783], 10.00th=[ 9634], 20.00th=[11338], 00:08:36.657 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:08:36.657 | 70.00th=[14353], 80.00th=[19268], 90.00th=[22938], 95.00th=[28181], 00:08:36.657 | 99.00th=[36439], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:36.657 | 99.99th=[42206] 00:08:36.657 bw ( KiB/s): min=12904, max=20111, per=25.05%, avg=16507.50, stdev=5096.12, samples=2 00:08:36.657 iops : min= 3226, max= 5027, avg=4126.50, stdev=1273.50, samples=2 00:08:36.657 lat (usec) : 500=0.04%, 1000=0.08% 00:08:36.657 lat (msec) : 2=0.10%, 4=0.69%, 10=8.20%, 20=70.97%, 50=19.39% 00:08:36.657 lat (msec) : 100=0.54% 00:08:36.657 cpu : usr=5.67%, sys=8.76%, ctx=447, majf=0, minf=13 00:08:36.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:36.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:36.657 issued rwts: total=4096,4260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:36.657 job2: (groupid=0, jobs=1): err= 0: pid=2508536: Wed Jul 24 19:05:42 2024 00:08:36.657 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:08:36.657 slat (usec): min=2, max=9306, avg=127.16, stdev=765.83 00:08:36.657 clat (usec): min=6699, max=35136, avg=16678.08, stdev=5506.15 00:08:36.657 lat (usec): min=6708, max=35151, avg=16805.24, stdev=5550.12 00:08:36.657 clat percentiles (usec): 00:08:36.657 | 1.00th=[ 7701], 5.00th=[11600], 10.00th=[12387], 20.00th=[13566], 00:08:36.657 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14746], 60.00th=[15139], 00:08:36.657 | 70.00th=[16188], 80.00th=[17957], 90.00th=[27919], 95.00th=[29754], 00:08:36.657 | 99.00th=[33424], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:08:36.657 | 99.99th=[35390] 00:08:36.657 write: IOPS=3658, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1006msec); 0 zone resets 00:08:36.657 slat (usec): min=4, max=11761, avg=137.16, stdev=801.93 00:08:36.657 clat (usec): min=2576, max=68735, avg=18015.08, stdev=9148.57 00:08:36.657 lat (usec): min=2896, max=68755, avg=18152.24, stdev=9198.61 00:08:36.657 clat percentiles (usec): 00:08:36.657 | 1.00th=[ 3785], 5.00th=[ 9896], 10.00th=[11338], 20.00th=[13173], 00:08:36.657 | 30.00th=[13829], 40.00th=[14353], 50.00th=[14877], 60.00th=[15401], 00:08:36.657 | 70.00th=[17957], 80.00th=[22414], 90.00th=[29492], 95.00th=[33424], 00:08:36.657 | 99.00th=[60556], 99.50th=[62653], 99.90th=[66847], 99.95th=[68682], 00:08:36.657 | 99.99th=[68682] 00:08:36.657 bw ( KiB/s): min=12288, max=16384, per=21.75%, avg=14336.00, stdev=2896.31, samples=2 00:08:36.657 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:08:36.657 lat (msec) : 4=0.52%, 10=3.04%, 20=76.31%, 50=19.04%, 100=1.09% 00:08:36.657 cpu : usr=4.88%, sys=8.06%, ctx=293, majf=0, minf=21 00:08:36.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:08:36.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:36.657 issued rwts: total=3584,3680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:36.657 job3: (groupid=0, 
jobs=1): err= 0: pid=2508537: Wed Jul 24 19:05:42 2024 00:08:36.657 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:08:36.657 slat (usec): min=2, max=12908, avg=132.13, stdev=787.32 00:08:36.657 clat (usec): min=6724, max=78158, avg=17935.16, stdev=11019.55 00:08:36.657 lat (usec): min=6729, max=78171, avg=18067.29, stdev=11055.32 00:08:36.657 clat percentiles (usec): 00:08:36.657 | 1.00th=[ 8455], 5.00th=[10945], 10.00th=[11863], 20.00th=[13173], 00:08:36.657 | 30.00th=[13435], 40.00th=[14091], 50.00th=[15401], 60.00th=[15926], 00:08:36.657 | 70.00th=[16909], 80.00th=[18482], 90.00th=[23987], 95.00th=[31851], 00:08:36.657 | 99.00th=[71828], 99.50th=[76022], 99.90th=[76022], 99.95th=[78119], 00:08:36.657 | 99.99th=[78119] 00:08:36.657 write: IOPS=4005, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1005msec); 0 zone resets 00:08:36.657 slat (usec): min=4, max=13720, avg=122.22, stdev=697.02 00:08:36.657 clat (usec): min=3446, max=58531, avg=15554.07, stdev=4676.29 00:08:36.657 lat (usec): min=4247, max=58588, avg=15676.29, stdev=4750.41 00:08:36.657 clat percentiles (usec): 00:08:36.657 | 1.00th=[ 7635], 5.00th=[11994], 10.00th=[12780], 20.00th=[13173], 00:08:36.657 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14746], 60.00th=[15008], 00:08:36.657 | 70.00th=[15401], 80.00th=[16581], 90.00th=[20055], 95.00th=[22676], 00:08:36.657 | 99.00th=[29754], 99.50th=[50594], 99.90th=[50594], 99.95th=[58459], 00:08:36.657 | 99.99th=[58459] 00:08:36.657 bw ( KiB/s): min=15056, max=16136, per=23.67%, avg=15596.00, stdev=763.68, samples=2 00:08:36.657 iops : min= 3764, max= 4034, avg=3899.00, stdev=190.92, samples=2 00:08:36.657 lat (msec) : 4=0.01%, 10=2.73%, 20=84.52%, 50=10.72%, 100=2.01% 00:08:36.657 cpu : usr=4.58%, sys=6.57%, ctx=451, majf=0, minf=9 00:08:36.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:36.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:36.657 issued rwts: total=3584,4026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:36.657 00:08:36.657 Run status group 0 (all jobs): 00:08:36.657 READ: bw=59.8MiB/s (62.7MB/s), 13.9MiB/s-16.1MiB/s (14.6MB/s-16.8MB/s), io=60.1MiB (63.1MB), run=1004-1006msec 00:08:36.657 WRITE: bw=64.4MiB/s (67.5MB/s), 14.3MiB/s-17.9MiB/s (15.0MB/s-18.8MB/s), io=64.7MiB (67.9MB), run=1004-1006msec 00:08:36.657 00:08:36.657 Disk stats (read/write): 00:08:36.657 nvme0n1: ios=3491/3584, merge=0/0, ticks=21342/21365, in_queue=42707, util=85.97% 00:08:36.657 nvme0n2: ios=3627/3902, merge=0/0, ticks=20815/20624, in_queue=41439, util=99.29% 00:08:36.657 nvme0n3: ios=3115/3515, merge=0/0, ticks=21808/27875, in_queue=49683, util=96.56% 00:08:36.657 nvme0n4: ios=3104/3072, merge=0/0, ticks=19309/16389, in_queue=35698, util=97.48% 00:08:36.657 19:05:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:36.657 [global] 00:08:36.657 thread=1 00:08:36.657 invalidate=1 00:08:36.657 rw=randwrite 00:08:36.657 time_based=1 00:08:36.657 runtime=1 00:08:36.657 ioengine=libaio 00:08:36.657 direct=1 00:08:36.657 bs=4096 00:08:36.657 iodepth=128 00:08:36.657 norandommap=0 00:08:36.657 numjobs=1 00:08:36.657 00:08:36.657 verify_dump=1 00:08:36.657 verify_backlog=512 00:08:36.657 verify_state_save=0 00:08:36.657 do_verify=1 00:08:36.657 
verify=crc32c-intel 00:08:36.657 [job0] 00:08:36.657 filename=/dev/nvme0n1 00:08:36.657 [job1] 00:08:36.657 filename=/dev/nvme0n2 00:08:36.657 [job2] 00:08:36.657 filename=/dev/nvme0n3 00:08:36.657 [job3] 00:08:36.657 filename=/dev/nvme0n4 00:08:36.657 Could not set queue depth (nvme0n1) 00:08:36.657 Could not set queue depth (nvme0n2) 00:08:36.657 Could not set queue depth (nvme0n3) 00:08:36.657 Could not set queue depth (nvme0n4) 00:08:36.657 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:36.657 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:36.657 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:36.657 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:36.657 fio-3.35 00:08:36.657 Starting 4 threads 00:08:38.042 00:08:38.042 job0: (groupid=0, jobs=1): err= 0: pid=2508720: Wed Jul 24 19:05:43 2024 00:08:38.042 read: IOPS=3084, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1006msec) 00:08:38.042 slat (usec): min=2, max=27648, avg=146.38, stdev=1039.38 00:08:38.042 clat (usec): min=4348, max=79383, avg=17559.16, stdev=9718.91 00:08:38.042 lat (usec): min=5908, max=79415, avg=17705.54, stdev=9802.65 00:08:38.042 clat percentiles (usec): 00:08:38.042 | 1.00th=[ 9241], 5.00th=[10945], 10.00th=[11076], 20.00th=[12125], 00:08:38.042 | 30.00th=[12518], 40.00th=[14484], 50.00th=[16057], 60.00th=[16712], 00:08:38.042 | 70.00th=[17695], 80.00th=[19530], 90.00th=[23462], 95.00th=[28967], 00:08:38.042 | 99.00th=[73925], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:08:38.042 | 99.99th=[79168] 00:08:38.042 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:08:38.042 slat (usec): min=4, max=21769, avg=141.74, stdev=874.34 00:08:38.042 clat (usec): min=6111, max=86629, avg=20236.61, stdev=10137.85 00:08:38.042 lat (usec): min=6619, max=86700, avg=20378.35, stdev=10226.53 00:08:38.042 clat percentiles (usec): 00:08:38.042 | 1.00th=[ 8717], 5.00th=[10290], 10.00th=[11207], 20.00th=[13304], 00:08:38.042 | 30.00th=[13829], 40.00th=[15401], 50.00th=[19006], 60.00th=[20055], 00:08:38.042 | 70.00th=[22152], 80.00th=[23725], 90.00th=[29492], 95.00th=[48497], 00:08:38.042 | 99.00th=[58983], 99.50th=[58983], 99.90th=[71828], 99.95th=[73925], 00:08:38.042 | 99.99th=[86508] 00:08:38.042 bw ( KiB/s): min=11656, max=16240, per=25.25%, avg=13948.00, stdev=3241.38, samples=2 00:08:38.042 iops : min= 2914, max= 4060, avg=3487.00, stdev=810.34, samples=2 00:08:38.042 lat (msec) : 10=3.22%, 20=66.20%, 50=28.01%, 100=2.57% 00:08:38.042 cpu : usr=4.68%, sys=7.56%, ctx=327, majf=0, minf=13 00:08:38.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:08:38.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:38.042 issued rwts: total=3103,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:38.042 job1: (groupid=0, jobs=1): err= 0: pid=2508726: Wed Jul 24 19:05:43 2024 00:08:38.042 read: IOPS=3618, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1005msec) 00:08:38.042 slat (usec): min=2, max=12700, avg=136.06, stdev=870.68 00:08:38.042 clat (usec): min=3850, max=44975, avg=16567.98, stdev=6898.32 00:08:38.042 lat (usec): min=3921, max=45009, 
avg=16704.04, stdev=6969.61 00:08:38.042 clat percentiles (usec): 00:08:38.042 | 1.00th=[ 4359], 5.00th=[ 8717], 10.00th=[10290], 20.00th=[11338], 00:08:38.042 | 30.00th=[11994], 40.00th=[13435], 50.00th=[14091], 60.00th=[15533], 00:08:38.042 | 70.00th=[18744], 80.00th=[23200], 90.00th=[27657], 95.00th=[28967], 00:08:38.042 | 99.00th=[36439], 99.50th=[39060], 99.90th=[39060], 99.95th=[40633], 00:08:38.042 | 99.99th=[44827] 00:08:38.042 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:08:38.042 slat (usec): min=4, max=10782, avg=113.62, stdev=696.80 00:08:38.042 clat (usec): min=352, max=56105, avg=16110.92, stdev=8533.95 00:08:38.042 lat (usec): min=678, max=56123, avg=16224.55, stdev=8585.59 00:08:38.042 clat percentiles (usec): 00:08:38.042 | 1.00th=[ 2900], 5.00th=[ 7570], 10.00th=[ 9503], 20.00th=[10421], 00:08:38.042 | 30.00th=[11076], 40.00th=[12125], 50.00th=[12780], 60.00th=[14746], 00:08:38.042 | 70.00th=[17171], 80.00th=[21890], 90.00th=[28181], 95.00th=[33424], 00:08:38.042 | 99.00th=[47973], 99.50th=[50070], 99.90th=[55837], 99.95th=[55837], 00:08:38.042 | 99.99th=[56361] 00:08:38.042 bw ( KiB/s): min=12312, max=19888, per=29.14%, avg=16100.00, stdev=5357.04, samples=2 00:08:38.042 iops : min= 3078, max= 4972, avg=4025.00, stdev=1339.26, samples=2 00:08:38.042 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.30% 00:08:38.042 lat (msec) : 4=0.71%, 10=12.92%, 20=60.82%, 50=24.87%, 100=0.36% 00:08:38.042 cpu : usr=3.78%, sys=6.08%, ctx=304, majf=0, minf=19 00:08:38.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:38.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:38.042 issued rwts: total=3637,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:38.042 job2: (groupid=0, jobs=1): err= 0: pid=2508728: Wed Jul 24 19:05:43 2024 00:08:38.042 read: IOPS=2343, BW=9373KiB/s (9598kB/s)(9392KiB/1002msec) 00:08:38.042 slat (usec): min=3, max=10114, avg=167.42, stdev=829.73 00:08:38.042 clat (usec): min=1019, max=43137, avg=20456.45, stdev=7623.14 00:08:38.042 lat (usec): min=1034, max=48665, avg=20623.87, stdev=7649.89 00:08:38.042 clat percentiles (usec): 00:08:38.042 | 1.00th=[ 5473], 5.00th=[12518], 10.00th=[13829], 20.00th=[14615], 00:08:38.042 | 30.00th=[14877], 40.00th=[15533], 50.00th=[16909], 60.00th=[21365], 00:08:38.042 | 70.00th=[25297], 80.00th=[28443], 90.00th=[31327], 95.00th=[34341], 00:08:38.042 | 99.00th=[40109], 99.50th=[40633], 99.90th=[42206], 99.95th=[42730], 00:08:38.042 | 99.99th=[43254] 00:08:38.042 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:08:38.042 slat (usec): min=5, max=15400, avg=224.10, stdev=1073.48 00:08:38.042 clat (msec): min=12, max=138, avg=30.68, stdev=23.48 00:08:38.042 lat (msec): min=12, max=139, avg=30.91, stdev=23.63 00:08:38.042 clat percentiles (msec): 00:08:38.042 | 1.00th=[ 13], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 17], 00:08:38.042 | 30.00th=[ 20], 40.00th=[ 21], 50.00th=[ 22], 60.00th=[ 27], 00:08:38.042 | 70.00th=[ 30], 80.00th=[ 35], 90.00th=[ 46], 95.00th=[ 92], 00:08:38.042 | 99.00th=[ 126], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 140], 00:08:38.042 | 99.99th=[ 140] 00:08:38.042 bw ( KiB/s): min= 8192, max=12288, per=18.53%, avg=10240.00, stdev=2896.31, samples=2 00:08:38.042 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:08:38.042 lat (msec) 
: 2=0.06%, 4=0.20%, 10=1.30%, 20=43.89%, 50=49.67% 00:08:38.042 lat (msec) : 100=2.77%, 250=2.10% 00:08:38.043 cpu : usr=4.20%, sys=6.59%, ctx=305, majf=0, minf=9 00:08:38.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:08:38.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:38.043 issued rwts: total=2348,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:38.043 job3: (groupid=0, jobs=1): err= 0: pid=2508730: Wed Jul 24 19:05:43 2024 00:08:38.043 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:08:38.043 slat (usec): min=4, max=8059, avg=131.56, stdev=669.19 00:08:38.043 clat (usec): min=10902, max=27799, avg=17159.99, stdev=2902.74 00:08:38.043 lat (usec): min=10916, max=27837, avg=17291.54, stdev=2950.78 00:08:38.043 clat percentiles (usec): 00:08:38.043 | 1.00th=[11731], 5.00th=[13698], 10.00th=[14222], 20.00th=[14746], 00:08:38.043 | 30.00th=[15270], 40.00th=[15795], 50.00th=[16319], 60.00th=[17171], 00:08:38.043 | 70.00th=[18744], 80.00th=[20055], 90.00th=[21103], 95.00th=[21627], 00:08:38.043 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26870], 99.95th=[27132], 00:08:38.043 | 99.99th=[27919] 00:08:38.043 write: IOPS=3644, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1003msec); 0 zone resets 00:08:38.043 slat (usec): min=5, max=10142, avg=131.93, stdev=679.74 00:08:38.043 clat (usec): min=583, max=30410, avg=17632.57, stdev=2934.46 00:08:38.043 lat (usec): min=4622, max=30460, avg=17764.50, stdev=2983.70 00:08:38.043 clat percentiles (usec): 00:08:38.043 | 1.00th=[ 5538], 5.00th=[14746], 10.00th=[15270], 20.00th=[16057], 00:08:38.043 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:08:38.043 | 70.00th=[18744], 80.00th=[19530], 90.00th=[21365], 95.00th=[23200], 00:08:38.043 | 99.00th=[25822], 99.50th=[25822], 99.90th=[28705], 99.95th=[29492], 00:08:38.043 | 99.99th=[30540] 00:08:38.043 bw ( KiB/s): min=13936, max=14736, per=25.95%, avg=14336.00, stdev=565.69, samples=2 00:08:38.043 iops : min= 3484, max= 3684, avg=3584.00, stdev=141.42, samples=2 00:08:38.043 lat (usec) : 750=0.01% 00:08:38.043 lat (msec) : 10=0.91%, 20=79.02%, 50=20.06% 00:08:38.043 cpu : usr=5.89%, sys=9.48%, ctx=344, majf=0, minf=11 00:08:38.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:08:38.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:38.043 issued rwts: total=3584,3655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:38.043 00:08:38.043 Run status group 0 (all jobs): 00:08:38.043 READ: bw=49.2MiB/s (51.6MB/s), 9373KiB/s-14.1MiB/s (9598kB/s-14.8MB/s), io=49.5MiB (51.9MB), run=1002-1006msec 00:08:38.043 WRITE: bw=54.0MiB/s (56.6MB/s), 9.98MiB/s-15.9MiB/s (10.5MB/s-16.7MB/s), io=54.3MiB (56.9MB), run=1002-1006msec 00:08:38.043 00:08:38.043 Disk stats (read/write): 00:08:38.043 nvme0n1: ios=2610/2784, merge=0/0, ticks=19971/22037, in_queue=42008, util=87.27% 00:08:38.043 nvme0n2: ios=3096/3319, merge=0/0, ticks=25594/24792, in_queue=50386, util=100.00% 00:08:38.043 nvme0n3: ios=1791/2048, merge=0/0, ticks=10286/16159, in_queue=26445, util=88.96% 00:08:38.043 nvme0n4: ios=2997/3072, merge=0/0, ticks=17491/17302, in_queue=34793, util=98.32% 00:08:38.043 19:05:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:38.043 19:05:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2508833 00:08:38.043 19:05:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:38.043 19:05:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:38.043 [global] 00:08:38.043 thread=1 00:08:38.043 invalidate=1 00:08:38.043 rw=read 00:08:38.043 time_based=1 00:08:38.043 runtime=10 00:08:38.043 ioengine=libaio 00:08:38.043 direct=1 00:08:38.043 bs=4096 00:08:38.043 iodepth=1 00:08:38.043 norandommap=1 00:08:38.043 numjobs=1 00:08:38.043 00:08:38.043 [job0] 00:08:38.043 filename=/dev/nvme0n1 00:08:38.043 [job1] 00:08:38.043 filename=/dev/nvme0n2 00:08:38.043 [job2] 00:08:38.043 filename=/dev/nvme0n3 00:08:38.043 [job3] 00:08:38.043 filename=/dev/nvme0n4 00:08:38.043 Could not set queue depth (nvme0n1) 00:08:38.043 Could not set queue depth (nvme0n2) 00:08:38.043 Could not set queue depth (nvme0n3) 00:08:38.043 Could not set queue depth (nvme0n4) 00:08:38.043 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:38.043 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:38.043 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:38.043 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:38.043 fio-3.35 00:08:38.043 Starting 4 threads 00:08:41.334 19:05:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:41.334 19:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:41.334 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=33804288, buflen=4096 00:08:41.334 fio: pid=2508911, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:41.334 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=13766656, buflen=4096 00:08:41.334 fio: pid=2508910, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:41.334 19:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:41.334 19:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:41.903 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=22769664, buflen=4096 00:08:41.903 fio: pid=2508908, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:41.903 19:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:41.903 19:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:42.162 19:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:42.162 19:05:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:42.162 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=32976896, buflen=4096 00:08:42.162 fio: pid=2508909, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:42.162 00:08:42.162 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2508908: Wed Jul 24 19:05:47 2024 00:08:42.162 read: IOPS=1575, BW=6303KiB/s (6454kB/s)(21.7MiB/3528msec) 00:08:42.162 slat (usec): min=4, max=14921, avg=18.25, stdev=317.83 00:08:42.162 clat (usec): min=210, max=42234, avg=609.83, stdev=3395.51 00:08:42.162 lat (usec): min=217, max=42240, avg=628.08, stdev=3410.91 00:08:42.162 clat percentiles (usec): 00:08:42.162 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 255], 00:08:42.162 | 30.00th=[ 269], 40.00th=[ 293], 50.00th=[ 318], 60.00th=[ 334], 00:08:42.162 | 70.00th=[ 351], 80.00th=[ 375], 90.00th=[ 420], 95.00th=[ 482], 00:08:42.162 | 99.00th=[ 750], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:42.162 | 99.99th=[42206] 00:08:42.162 bw ( KiB/s): min= 96, max=11568, per=19.99%, avg=5260.00, stdev=5547.07, samples=6 00:08:42.162 iops : min= 24, max= 2892, avg=1315.00, stdev=1386.77, samples=6 00:08:42.162 lat (usec) : 250=15.85%, 500=80.47%, 750=2.68%, 1000=0.22% 00:08:42.162 lat (msec) : 2=0.05%, 10=0.02%, 50=0.70% 00:08:42.162 cpu : usr=0.96%, sys=1.87%, ctx=5569, majf=0, minf=1 00:08:42.162 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:42.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.162 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.162 issued rwts: total=5560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.162 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:42.162 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2508909: Wed Jul 24 19:05:47 2024 00:08:42.162 read: IOPS=2099, BW=8397KiB/s (8599kB/s)(31.4MiB/3835msec) 00:08:42.162 slat (usec): min=5, max=15156, avg=17.90, stdev=285.41 00:08:42.162 clat (usec): min=210, max=42216, avg=452.12, stdev=2396.63 00:08:42.162 lat (usec): min=216, max=42222, avg=470.03, stdev=2414.17 00:08:42.162 clat percentiles (usec): 00:08:42.162 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 237], 20.00th=[ 277], 00:08:42.162 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 322], 00:08:42.162 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 375], 00:08:42.162 | 99.00th=[ 494], 99.50th=[ 881], 99.90th=[41157], 99.95th=[41157], 00:08:42.162 | 99.99th=[42206] 00:08:42.162 bw ( KiB/s): min= 96, max=12593, per=30.25%, avg=7959.00, stdev=5393.12, samples=7 00:08:42.162 iops : min= 24, max= 3148, avg=1989.71, stdev=1348.24, samples=7 00:08:42.162 lat (usec) : 250=12.62%, 500=86.43%, 750=0.34%, 1000=0.16% 00:08:42.162 lat (msec) : 2=0.09%, 4=0.01%, 50=0.35% 00:08:42.162 cpu : usr=1.93%, sys=3.10%, ctx=8059, majf=0, minf=1 00:08:42.162 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:42.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.162 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.162 issued rwts: total=8052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.162 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:42.162 
job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2508910: Wed Jul 24 19:05:47 2024 00:08:42.162 read: IOPS=1042, BW=4169KiB/s (4269kB/s)(13.1MiB/3225msec) 00:08:42.162 slat (nsec): min=5048, max=50777, avg=10718.59, stdev=5448.97 00:08:42.162 clat (usec): min=225, max=41419, avg=939.37, stdev=4892.56 00:08:42.162 lat (usec): min=231, max=41428, avg=950.08, stdev=4894.03 00:08:42.162 clat percentiles (usec): 00:08:42.162 | 1.00th=[ 247], 5.00th=[ 262], 10.00th=[ 273], 20.00th=[ 289], 00:08:42.162 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 338], 00:08:42.162 | 70.00th=[ 355], 80.00th=[ 383], 90.00th=[ 420], 95.00th=[ 486], 00:08:42.162 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:42.162 | 99.99th=[41681] 00:08:42.162 bw ( KiB/s): min= 104, max=12088, per=13.80%, avg=3630.67, stdev=4667.53, samples=6 00:08:42.162 iops : min= 26, max= 3022, avg=907.67, stdev=1166.88, samples=6 00:08:42.162 lat (usec) : 250=1.61%, 500=95.36%, 750=1.43%, 1000=0.03% 00:08:42.162 lat (msec) : 2=0.03%, 10=0.03%, 50=1.49% 00:08:42.162 cpu : usr=0.59%, sys=1.80%, ctx=3362, majf=0, minf=1 00:08:42.162 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:42.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.162 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.162 issued rwts: total=3362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.162 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:42.162 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2508911: Wed Jul 24 19:05:47 2024 00:08:42.162 read: IOPS=2817, BW=11.0MiB/s (11.5MB/s)(32.2MiB/2930msec) 00:08:42.162 slat (nsec): min=4790, max=55241, avg=10891.00, stdev=5019.40 00:08:42.162 clat (usec): min=227, max=42198, avg=338.87, stdev=914.36 00:08:42.162 lat (usec): min=237, max=42204, avg=349.76, stdev=914.39 00:08:42.162 clat percentiles (usec): 00:08:42.162 | 1.00th=[ 243], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:08:42.162 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 318], 00:08:42.162 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 392], 95.00th=[ 478], 00:08:42.162 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 1188], 99.95th=[11731], 00:08:42.162 | 99.99th=[42206] 00:08:42.162 bw ( KiB/s): min= 9096, max=12288, per=42.19%, avg=11099.20, stdev=1259.70, samples=5 00:08:42.162 iops : min= 2274, max= 3072, avg=2774.80, stdev=314.92, samples=5 00:08:42.162 lat (usec) : 250=4.49%, 500=92.89%, 750=2.41%, 1000=0.06% 00:08:42.162 lat (msec) : 2=0.06%, 4=0.01%, 20=0.01%, 50=0.05% 00:08:42.162 cpu : usr=1.57%, sys=4.40%, ctx=8254, majf=0, minf=1 00:08:42.162 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:42.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.162 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.162 issued rwts: total=8254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.162 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:42.162 00:08:42.162 Run status group 0 (all jobs): 00:08:42.162 READ: bw=25.7MiB/s (26.9MB/s), 4169KiB/s-11.0MiB/s (4269kB/s-11.5MB/s), io=98.5MiB (103MB), run=2930-3835msec 00:08:42.162 00:08:42.162 Disk stats (read/write): 00:08:42.162 nvme0n1: ios=5112/0, merge=0/0, ticks=3745/0, in_queue=3745, util=98.66% 00:08:42.162 nvme0n2: ios=7279/0, merge=0/0, 
ticks=3382/0, in_queue=3382, util=95.53% 00:08:42.162 nvme0n3: ios=3048/0, merge=0/0, ticks=3043/0, in_queue=3043, util=96.76% 00:08:42.162 nvme0n4: ios=8089/0, merge=0/0, ticks=2686/0, in_queue=2686, util=96.75% 00:08:42.421 19:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:42.421 19:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:42.679 19:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:42.680 19:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:42.938 19:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:42.938 19:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:43.196 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:43.196 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:43.455 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:43.455 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2508833 00:08:43.455 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:43.455 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:43.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.714 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:43.714 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:08:43.714 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:43.714 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:43.714 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:43.714 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:43.714 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:08:43.714 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:43.714 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:43.714 nvmf hotplug test: fio failed as expected 00:08:43.714 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:43.974 rmmod nvme_tcp 00:08:43.974 rmmod nvme_fabrics 00:08:43.974 rmmod nvme_keyring 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2507244 ']' 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2507244 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2507244 ']' 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2507244 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2507244 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2507244' 00:08:43.974 killing process with pid 2507244 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2507244 00:08:43.974 19:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2507244 00:08:44.234 19:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.234 19:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:44.234 19:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:44.234 19:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.234 19:05:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:44.234 19:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.234 19:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.234 19:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:46.772 00:08:46.772 real 0m23.492s 00:08:46.772 user 1m21.505s 00:08:46.772 sys 0m7.356s 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:46.772 ************************************ 00:08:46.772 END TEST nvmf_fio_target 00:08:46.772 ************************************ 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.772 ************************************ 00:08:46.772 START TEST nvmf_bdevio 00:08:46.772 ************************************ 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:46.772 * Looking for test storage... 
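For context, the nvmf_fio_target hotplug pass that just completed boils down to the following sequence (condensed from the target/fio.sh trace above; PIDs, absolute workspace paths, and per-device output are omitted, so treat this as a sketch rather than the literal script):

    # start long-running reads against the exported namespaces, in the background
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # pull the bdevs out from under the running jobs
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for malloc in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$malloc"
    done
    # fio is expected to exit non-zero, with err=121 (Remote I/O error) on every job
    wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'

Every job indeed finished with err=121, which is the pass condition here: deleting a bdev must surface as an I/O error on the initiator rather than hanging the queue.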
00:08:46.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:46.772 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:08:46.773 19:05:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:48.149 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.149 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:48.150 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:48.150 Found net devices under 0000:08:00.0: cvl_0_0 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:48.150 Found net devices under 0000:08:00.1: cvl_0_1 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.150 19:05:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.150 19:05:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:08:48.150 00:08:48.150 --- 10.0.0.2 ping statistics --- 00:08:48.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.150 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:48.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:08:48.150 00:08:48.150 --- 10.0.0.1 ping statistics --- 00:08:48.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.150 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2511027 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2511027 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2511027 ']' 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.150 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:48.150 [2024-07-24 19:05:54.107680] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
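At this point the target is coming up inside its own network namespace. The wiring that nvmftestinit performed above is what lets the initiator (cvl_0_1, 10.0.0.1) and the target (cvl_0_0, 10.0.0.2) talk over real NICs on a single host; the sketch below is condensed from the trace, and the interface names and the 0x78 core mask are specific to this runner:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the two pings above verify both directions before the target starts
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78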
00:08:48.150 [2024-07-24 19:05:54.107782] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.150 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.408 [2024-07-24 19:05:54.176371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.408 [2024-07-24 19:05:54.294314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.408 [2024-07-24 19:05:54.294374] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.408 [2024-07-24 19:05:54.294399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.408 [2024-07-24 19:05:54.294431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.408 [2024-07-24 19:05:54.294451] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.408 [2024-07-24 19:05:54.294561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:48.408 [2024-07-24 19:05:54.294619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:48.408 [2024-07-24 19:05:54.294650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:48.408 [2024-07-24 19:05:54.294662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.408 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.408 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:08:48.408 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:48.408 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:48.408 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:48.666 [2024-07-24 19:05:54.439780] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:48.666 Malloc0 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:48.666 [2024-07-24 19:05:54.490312] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:48.666 { 00:08:48.666 "params": { 00:08:48.666 "name": "Nvme$subsystem", 00:08:48.666 "trtype": "$TEST_TRANSPORT", 00:08:48.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:48.666 "adrfam": "ipv4", 00:08:48.666 "trsvcid": "$NVMF_PORT", 00:08:48.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:48.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:48.666 "hdgst": ${hdgst:-false}, 00:08:48.666 "ddgst": ${ddgst:-false} 00:08:48.666 }, 00:08:48.666 "method": "bdev_nvme_attach_controller" 00:08:48.666 } 00:08:48.666 EOF 00:08:48.666 )") 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:08:48.666 19:05:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:48.666 "params": { 00:08:48.666 "name": "Nvme1", 00:08:48.666 "trtype": "tcp", 00:08:48.666 "traddr": "10.0.0.2", 00:08:48.666 "adrfam": "ipv4", 00:08:48.666 "trsvcid": "4420", 00:08:48.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:48.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:48.666 "hdgst": false, 00:08:48.666 "ddgst": false 00:08:48.666 }, 00:08:48.666 "method": "bdev_nvme_attach_controller" 00:08:48.666 }' 00:08:48.666 [2024-07-24 19:05:54.541204] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:08:48.666 [2024-07-24 19:05:54.541295] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511062 ] 00:08:48.666 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.666 [2024-07-24 19:05:54.603710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:48.924 [2024-07-24 19:05:54.726038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.924 [2024-07-24 19:05:54.726093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.924 [2024-07-24 19:05:54.726096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.181 I/O targets: 00:08:49.181 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:49.181 00:08:49.181 00:08:49.181 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.181 http://cunit.sourceforge.net/ 00:08:49.181 00:08:49.181 00:08:49.181 Suite: bdevio tests on: Nvme1n1 00:08:49.181 Test: blockdev write read block ...passed 00:08:49.181 Test: blockdev write zeroes read block ...passed 00:08:49.181 Test: blockdev write zeroes read no split ...passed 00:08:49.181 Test: blockdev write zeroes read split ...passed 00:08:49.440 Test: blockdev write zeroes read split partial ...passed 00:08:49.440 Test: blockdev reset ...[2024-07-24 19:05:55.223810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:08:49.440 [2024-07-24 19:05:55.223935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d83f60 (9): Bad file descriptor 00:08:49.440 [2024-07-24 19:05:55.292677] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:49.440 passed 00:08:49.440 Test: blockdev write read 8 blocks ...passed 00:08:49.440 Test: blockdev write read size > 128k ...passed 00:08:49.440 Test: blockdev write read invalid size ...passed 00:08:49.440 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:49.440 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:49.440 Test: blockdev write read max offset ...passed 00:08:49.440 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:49.440 Test: blockdev writev readv 8 blocks ...passed 00:08:49.440 Test: blockdev writev readv 30 x 1block ...passed 00:08:49.698 Test: blockdev writev readv block ...passed 00:08:49.698 Test: blockdev writev readv size > 128k ...passed 00:08:49.698 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:49.698 Test: blockdev comparev and writev ...[2024-07-24 19:05:55.468240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:49.698 [2024-07-24 19:05:55.468282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:49.698 [2024-07-24 19:05:55.468309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:49.698 [2024-07-24 19:05:55.468336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:49.698 [2024-07-24 19:05:55.468688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:49.698 [2024-07-24 19:05:55.468719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:49.698 [2024-07-24 19:05:55.468743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:49.698 [2024-07-24 19:05:55.468761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:49.698 [2024-07-24 19:05:55.469109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:49.698 [2024-07-24 19:05:55.469133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:49.698 [2024-07-24 19:05:55.469157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:49.698 [2024-07-24 19:05:55.469174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:49.698 [2024-07-24 19:05:55.469526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:49.698 [2024-07-24 19:05:55.469551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:49.698 [2024-07-24 19:05:55.469575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:49.698 [2024-07-24 19:05:55.469591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:49.698 passed 00:08:49.698 Test: blockdev nvme passthru rw ...passed 00:08:49.698 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:05:55.552788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:49.698 [2024-07-24 19:05:55.552817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:49.698 [2024-07-24 19:05:55.552990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:49.698 [2024-07-24 19:05:55.553013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:49.698 [2024-07-24 19:05:55.553182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:49.698 [2024-07-24 19:05:55.553205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:49.698 [2024-07-24 19:05:55.553369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:49.698 [2024-07-24 19:05:55.553393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:49.698 passed 00:08:49.698 Test: blockdev nvme admin passthru ...passed 00:08:49.698 Test: blockdev copy ...passed 00:08:49.698 00:08:49.698 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.698 suites 1 1 n/a 0 0 00:08:49.698 tests 23 23 23 0 0 00:08:49.698 asserts 152 152 152 0 n/a 00:08:49.698 00:08:49.698 Elapsed time = 1.164 seconds 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.957 rmmod nvme_tcp 00:08:49.957 rmmod nvme_fabrics 00:08:49.957 rmmod nvme_keyring 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
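The bdevio suite itself is finished at this point: the CUnit summary above reports 23 of 23 tests passed in 1.164 seconds, and the COMPARE FAILURE / ABORTED - FAILED FUSED messages from the compare-and-write cases are printed at NOTICE level while those tests still pass, so they are evidently exercised error paths rather than failures. What brackets this point in the trace is the teardown: subsystem deletion and kernel-module unload above, process kill and namespace cleanup just below. Condensed into plain shell, it is roughly the following sketch — rpc.py stands in for the rpc_cmd wrapper the harness uses, and $NVMF_PID is an illustrative stand-in for the recorded target pid (2511027 in this run):

rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem bdevio exercised
modprobe -v -r nvme-tcp                                   # unload initiator-side kernel modules;
modprobe -v -r nvme-fabrics                               # the rmmod lines above are their output
kill "$NVMF_PID"                                          # killprocess: kill, then wait for a clean exit
ip netns delete cvl_0_0_ns_spdk                           # remove_spdk_ns
ip -4 addr flush cvl_0_1                                  # drop the initiator-side test address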
00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2511027 ']' 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2511027 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2511027 ']' 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2511027 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2511027 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2511027' 00:08:49.957 killing process with pid 2511027 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2511027 00:08:49.957 19:05:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2511027 00:08:50.216 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:50.216 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:50.216 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:50.216 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.216 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.216 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.216 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.216 19:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.759 19:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:52.759 00:08:52.759 real 0m5.970s 00:08:52.759 user 0m10.193s 00:08:52.759 sys 0m1.828s 00:08:52.759 19:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:52.760 ************************************ 00:08:52.760 END TEST nvmf_bdevio 00:08:52.760 ************************************ 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:52.760 00:08:52.760 real 3m49.598s 00:08:52.760 user 10m4.924s 00:08:52.760 sys 1m5.049s 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.760 ************************************ 00:08:52.760 END TEST nvmf_target_core 00:08:52.760 ************************************ 00:08:52.760 19:05:58 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:52.760 19:05:58 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:52.760 19:05:58 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.760 19:05:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:52.760 ************************************ 00:08:52.760 START TEST nvmf_target_extra 00:08:52.760 ************************************ 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:52.760 * Looking for test storage... 00:08:52.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
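The nvmf_example test that starts here rebuilds the test network from scratch; the traced steps appear a little further down. Two ice (E810) ports are discovered (cvl_0_0 and cvl_0_1), and the target side is isolated in a network namespace so that initiator and target speak NVMe/TCP across a real link. Condensed into plain shell — every command below is taken verbatim from the nvmf_tcp_init trace that follows — the bring-up is:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1                # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on 4420
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

With the plumbing verified, the example target (build/examples/nvmf -i 0 -g 10000 -m 0xF) is launched inside the namespace, a 64 MiB malloc bdev is exported as namespace 1 of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and spdk_nvme_perf drives a 10-second 4 KiB random read/write workload (queue depth 64, 30% read mix) against it — the latency table further down is its output.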
00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:52.760 ************************************ 00:08:52.760 START TEST nvmf_example 00:08:52.760 ************************************ 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:52.760 * Looking for test storage... 00:08:52.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.760 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.761 19:05:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.761 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:54.138 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:54.138 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.138 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:54.139 Found net devices under 0000:08:00.0: cvl_0_0 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.139 19:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:54.139 Found net devices under 0000:08:00.1: cvl_0_1 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.139 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:54.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:08:54.397 00:08:54.397 --- 10.0.0.2 ping statistics --- 00:08:54.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.397 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:08:54.397 00:08:54.397 --- 10.0.0.1 ping statistics --- 00:08:54.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.397 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2512736 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2512736 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2512736 ']' 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.397 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.398 19:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.398 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.398 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.398 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.656 19:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.656 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:54.914 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.914 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:54.914 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:54.914 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.122 Initializing NVMe Controllers 00:09:07.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:07.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:07.122 Initialization complete. Launching workers. 00:09:07.122 ======================================================== 00:09:07.122 Latency(us) 00:09:07.122 Device Information : IOPS MiB/s Average min max 00:09:07.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13876.40 54.20 4613.69 836.20 19074.17 00:09:07.122 ======================================================== 00:09:07.122 Total : 13876.40 54.20 4613.69 836.20 19074.17 00:09:07.122 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:07.122 rmmod nvme_tcp 00:09:07.122 rmmod nvme_fabrics 00:09:07.122 rmmod nvme_keyring 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2512736 ']' 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2512736 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2512736 ']' 00:09:07.122 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2512736 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.123 19:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2512736 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2512736' 00:09:07.123 killing process with pid 2512736 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2512736 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2512736 00:09:07.123 nvmf threads initialize successfully 00:09:07.123 bdev subsystem init successfully 00:09:07.123 created a nvmf target service 00:09:07.123 create targets's poll groups done 00:09:07.123 all subsystems of target started 00:09:07.123 nvmf target is running 00:09:07.123 all subsystems of target stopped 00:09:07.123 destroy targets's poll groups done 00:09:07.123 destroyed the nvmf target service 00:09:07.123 bdev subsystem finish successfully 00:09:07.123 nvmf threads destroy successfully 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.123 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:07.695 00:09:07.695 real 0m15.122s 00:09:07.695 user 0m42.979s 00:09:07.695 sys 0m3.043s 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:07.695 ************************************ 00:09:07.695 END TEST nvmf_example 00:09:07.695 ************************************ 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.695 19:06:13 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:07.695 ************************************ 00:09:07.695 START TEST nvmf_filesystem 00:09:07.695 ************************************ 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:07.695 * Looking for test storage... 00:09:07.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:07.695 19:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:07.695 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:07.696 19:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:07.696 19:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:07.696 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:07.696 19:06:13 
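The CONFIG_* block that ends above is test/common/build_config.sh mirroring the build-time configuration into the shell; applications.sh@23, traced just below, then reads the generated include/spdk/config.h and pattern-matches its contents to detect a debug build. A minimal sketch of that content check, with debug_build a hypothetical variable and assuming the $rootdir layout used throughout this job:

if [[ $(< "$rootdir/include/spdk/config.h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
  # debug build: the SPDK_AUTOTEST_DEBUG_APPS handling at applications.sh@24 applies
  debug_build=1
fi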
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:07.696 #define SPDK_CONFIG_H 00:09:07.696 #define SPDK_CONFIG_APPS 1 00:09:07.696 #define SPDK_CONFIG_ARCH native 00:09:07.696 #undef SPDK_CONFIG_ASAN 00:09:07.696 #undef SPDK_CONFIG_AVAHI 00:09:07.696 #undef SPDK_CONFIG_CET 00:09:07.696 #define SPDK_CONFIG_COVERAGE 1 00:09:07.696 #define SPDK_CONFIG_CROSS_PREFIX 00:09:07.696 #undef SPDK_CONFIG_CRYPTO 00:09:07.696 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:07.696 #undef SPDK_CONFIG_CUSTOMOCF 00:09:07.696 #undef SPDK_CONFIG_DAOS 00:09:07.696 #define SPDK_CONFIG_DAOS_DIR 00:09:07.696 #define SPDK_CONFIG_DEBUG 1 00:09:07.696 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:07.696 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:07.697 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:07.697 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:07.697 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:07.697 #undef SPDK_CONFIG_DPDK_UADK 00:09:07.697 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:07.697 #define SPDK_CONFIG_EXAMPLES 1 00:09:07.697 #undef SPDK_CONFIG_FC 00:09:07.697 #define SPDK_CONFIG_FC_PATH 00:09:07.697 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:07.697 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:07.697 #undef SPDK_CONFIG_FUSE 00:09:07.697 #undef SPDK_CONFIG_FUZZER 00:09:07.697 #define SPDK_CONFIG_FUZZER_LIB 00:09:07.697 #undef SPDK_CONFIG_GOLANG 00:09:07.697 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:07.697 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:07.697 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:07.697 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:07.697 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:07.697 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:07.697 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:07.697 #define SPDK_CONFIG_IDXD 1 00:09:07.697 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:07.697 #undef SPDK_CONFIG_IPSEC_MB 00:09:07.697 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:07.697 #define SPDK_CONFIG_ISAL 1 00:09:07.697 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:07.697 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:07.697 #define SPDK_CONFIG_LIBDIR 00:09:07.697 #undef SPDK_CONFIG_LTO 00:09:07.697 #define SPDK_CONFIG_MAX_LCORES 128 00:09:07.697 #define SPDK_CONFIG_NVME_CUSE 1 00:09:07.697 #undef SPDK_CONFIG_OCF 00:09:07.697 #define SPDK_CONFIG_OCF_PATH 00:09:07.697 #define SPDK_CONFIG_OPENSSL_PATH 00:09:07.697 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:07.697 #define SPDK_CONFIG_PGO_DIR 00:09:07.697 #undef SPDK_CONFIG_PGO_USE 00:09:07.697 #define SPDK_CONFIG_PREFIX /usr/local 00:09:07.697 #undef SPDK_CONFIG_RAID5F 00:09:07.697 #undef SPDK_CONFIG_RBD 00:09:07.697 #define SPDK_CONFIG_RDMA 1 00:09:07.697 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:07.697 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:07.697 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:07.697 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:07.697 #define SPDK_CONFIG_SHARED 1 00:09:07.697 #undef SPDK_CONFIG_SMA 00:09:07.697 #define SPDK_CONFIG_TESTS 1 00:09:07.697 #undef SPDK_CONFIG_TSAN 00:09:07.697 #define SPDK_CONFIG_UBLK 1 00:09:07.697 #define SPDK_CONFIG_UBSAN 1 00:09:07.697 #undef SPDK_CONFIG_UNIT_TESTS 00:09:07.697 #undef SPDK_CONFIG_URING 00:09:07.697 #define SPDK_CONFIG_URING_PATH 00:09:07.697 #undef SPDK_CONFIG_URING_ZNS 00:09:07.697 #undef SPDK_CONFIG_USDT 00:09:07.697 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:07.697 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:07.697 #define SPDK_CONFIG_VFIO_USER 1 00:09:07.697 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:09:07.697 #define SPDK_CONFIG_VHOST 1 00:09:07.697 #define SPDK_CONFIG_VIRTIO 1 00:09:07.697 #undef SPDK_CONFIG_VTUNE 00:09:07.697 #define SPDK_CONFIG_VTUNE_DIR 00:09:07.697 #define SPDK_CONFIG_WERROR 1 00:09:07.697 #define SPDK_CONFIG_WPDK_DIR 00:09:07.697 #undef SPDK_CONFIG_XNVME 00:09:07.697 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:07.697 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:07.698 19:06:13 
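The PATH value echoed above (like the PYTHONPATH and LD_LIBRARY_PATH values later in this trace) holds the same toolchain directories many times over: each nested run_test re-sources the export scripts, and every source prepends the directories again without checking for duplicates. A hypothetical idempotent prepend, shown only to make that mechanism clear (paths/export.sh itself does no such deduplication):

prepend_once() {
  local dir=$1 var=$2
  case ":${!var}:" in
    *":$dir:"*) ;;                                # already on the list; skip
    *) printf -v "$var" '%s' "$dir:${!var}" ;;    # otherwise prepend
  esac
}
prepend_once /opt/go/1.21.1/bin PATH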
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:07.698 19:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:07.698 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:07.699 19:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:07.699 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j32 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 2514650 ]] 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 2514650 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.LMIKmP 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.LMIKmP/tests/target /tmp/spdk.LMIKmP 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=1957711872 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=3326717952 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=42839941120 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=53546168320 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10706227200 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=26761826304 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=26773082112 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=11255808 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 
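The @362-@365 records above are set_test_storage reading `df -T` row by row into the mounts/fss/sizes/avails/uses associative arrays keyed by mount point. A minimal sketch of that loop, following the field order in the trace (the 1K-block to byte scaling is inferred from the logged values):

declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  sizes["$mount"]=$((size * 1024))    # df -T reports 1K blocks; the logged values are bytes
  avails["$mount"]=$((avail * 1024))
  uses["$mount"]=$((use * 1024))
done < <(df -T | grep -v Filesystem)  # the @329 pipeline drops the header row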
00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=10687102976 00:09:07.700 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=10709233664 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22130688 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=26772365312 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=26773086208 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=720896 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=5354610688 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5354614784 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:09:07.701 * Looking for test storage... 
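What follows below is the candidate walk: for each entry in storage_candidates, df resolves the backing mount point, target_space is checked against the roughly 2 GiB requested_size, and on the catch-all / mount the projected usage is also kept under 95% of the filesystem before SPDK_TEST_STORAGE is exported. A condensed sketch of that decision, with names taken from the trace (the tmpfs/ramfs special cases at @382 are elided here):

for target_dir in "${storage_candidates[@]}"; do
  mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')   # the @374 pipeline
  target_space=${avails[$mount]}
  ((target_space == 0 || target_space < requested_size)) && continue
  if [[ $mount == / ]]; then
    new_size=$(( ${uses[$mount]} + requested_size ))
    (( new_size * 100 / ${sizes[$mount]} > 95 )) && continue   # would push / past 95% full
  fi
  export SPDK_TEST_STORAGE=$target_dir
  break
done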
00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=42839941120 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=12920819712 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.701 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.702 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.961 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:07.961 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:07.961 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:07.961 19:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:09.339 
19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:09.339 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:09.340 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:09.340 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:09.340 Found net devices under 0000:08:00.0: cvl_0_0 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:09.340 Found net devices under 0000:08:00.1: cvl_0_1 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:09.340 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:09.599 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:09.599 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:09.599 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:09.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:09.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms
00:09:09.599 
00:09:09.599 --- 10.0.0.2 ping statistics ---
00:09:09.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:09.599 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms
00:09:09.599 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:09.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:09.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms
00:09:09.599 
00:09:09.599 --- 10.0.0.1 ping statistics ---
00:09:09.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:09.599 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms
00:09:09.599 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:09.599 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:09:09.599 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:09.599 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:09.599 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:09:09.600 ************************************
00:09:09.600 START TEST nvmf_filesystem_no_in_capsule
00:09:09.600 ************************************
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2515900
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2515900
00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2515900 ']' 00:09:09.600
19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.600 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.600 [2024-07-24 19:06:15.500247] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:09:09.600 [2024-07-24 19:06:15.500346] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.600 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.600 [2024-07-24 19:06:15.569271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.859 [2024-07-24 19:06:15.686705] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.859 [2024-07-24 19:06:15.686771] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.859 [2024-07-24 19:06:15.686787] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.859 [2024-07-24 19:06:15.686801] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.859 [2024-07-24 19:06:15.686812] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
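With the link split across namespaces (target side cvl_0_0 at 10.0.0.2 inside cvl_0_0_ns_spdk, initiator side cvl_0_1 at 10.0.0.1 on the host, both pings passing), nvmfappstart runs nvmf_tgt in the server namespace and the test then provisions it over JSON-RPC, as traced at target/filesystem.sh@52-@56 below. Condensed, with rpc.py standing in for scripts/rpc.py against the default /var/tmp/spdk.sock:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0   # -c 0: no in-capsule data
    rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420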
00:09:09.859 [2024-07-24 19:06:15.686920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.859 [2024-07-24 19:06:15.687034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.859 [2024-07-24 19:06:15.687083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.859 [2024-07-24 19:06:15.687087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.859 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.859 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:09.859 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.859 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.859 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.859 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.859 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:09.859 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:09.859 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.859 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.859 [2024-07-24 19:06:15.846804] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.859 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.859 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:09.859 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.859 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.117 Malloc1 00:09:10.117 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.117 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:10.117 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.117 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.117 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.117 19:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:10.117 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.117 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.117 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.117 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.117 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.117 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.117 [2024-07-24 19:06:16.013988] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:10.118 { 00:09:10.118 "name": "Malloc1", 00:09:10.118 "aliases": [ 00:09:10.118 "4c65a70d-e7a5-40f3-9b66-4dd7ff13c9bd" 00:09:10.118 ], 00:09:10.118 "product_name": "Malloc disk", 00:09:10.118 "block_size": 512, 00:09:10.118 "num_blocks": 1048576, 00:09:10.118 "uuid": "4c65a70d-e7a5-40f3-9b66-4dd7ff13c9bd", 00:09:10.118 "assigned_rate_limits": { 00:09:10.118 "rw_ios_per_sec": 0, 00:09:10.118 "rw_mbytes_per_sec": 0, 00:09:10.118 "r_mbytes_per_sec": 0, 00:09:10.118 "w_mbytes_per_sec": 0 00:09:10.118 }, 00:09:10.118 "claimed": true, 00:09:10.118 "claim_type": "exclusive_write", 00:09:10.118 "zoned": false, 00:09:10.118 "supported_io_types": { 00:09:10.118 "read": 
true, 00:09:10.118 "write": true, 00:09:10.118 "unmap": true, 00:09:10.118 "flush": true, 00:09:10.118 "reset": true, 00:09:10.118 "nvme_admin": false, 00:09:10.118 "nvme_io": false, 00:09:10.118 "nvme_io_md": false, 00:09:10.118 "write_zeroes": true, 00:09:10.118 "zcopy": true, 00:09:10.118 "get_zone_info": false, 00:09:10.118 "zone_management": false, 00:09:10.118 "zone_append": false, 00:09:10.118 "compare": false, 00:09:10.118 "compare_and_write": false, 00:09:10.118 "abort": true, 00:09:10.118 "seek_hole": false, 00:09:10.118 "seek_data": false, 00:09:10.118 "copy": true, 00:09:10.118 "nvme_iov_md": false 00:09:10.118 }, 00:09:10.118 "memory_domains": [ 00:09:10.118 { 00:09:10.118 "dma_device_id": "system", 00:09:10.118 "dma_device_type": 1 00:09:10.118 }, 00:09:10.118 { 00:09:10.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.118 "dma_device_type": 2 00:09:10.118 } 00:09:10.118 ], 00:09:10.118 "driver_specific": {} 00:09:10.118 } 00:09:10.118 ]' 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:10.118 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:10.684 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.684 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:10.684 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.684 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:10.684 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:13.256 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:13.514 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.888 ************************************ 00:09:14.888 START TEST filesystem_ext4 00:09:14.888 ************************************ 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
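filesystem_ext4 below, and the btrfs and xfs subtests after it, all run the same body against the GPT partition prepared at target/filesystem.sh@60-@69 above: connect the initiator with the generated host NQN, locate the block device by its serial, make the filesystem, then mount, touch, sync, rm and umount it (@21-@30), finally checking with lsblk that the device and partition are still present (@40-@43). Condensed with the values from this run (the force flag is -F for ext4 and -f for btrfs and xfs):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc \
        --hostid=a27f578f-8275-e111-bd1d-001e673e77fc
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

    mkfs."$fstype" "$force" "/dev/${nvme_name}p1"   # fstype: ext4 | btrfs | xfs
    mount "/dev/${nvme_name}p1" /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device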
00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:14.888 mke2fs 1.46.5 (30-Dec-2021) 00:09:14.888 Discarding device blocks: 0/522240 done 00:09:14.888 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:14.888 Filesystem UUID: f94c2b19-d818-49ab-a4c5-63207ebbbc96 00:09:14.888 Superblock backups stored on blocks: 00:09:14.888 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:14.888 00:09:14.888 Allocating group tables: 0/64 done 00:09:14.888 Writing inode tables: 0/64 done 00:09:14.888 Creating journal (8192 blocks): done 00:09:14.888 Writing superblocks and filesystem accounting information: 0/64 done 00:09:14.888 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:14.888 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:15.147 19:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:15.147 
19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2515900 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:15.147 00:09:15.147 real 0m0.535s 00:09:15.147 user 0m0.018s 00:09:15.147 sys 0m0.060s 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:15.147 ************************************ 00:09:15.147 END TEST filesystem_ext4 00:09:15.147 ************************************ 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.147 ************************************ 00:09:15.147 START TEST filesystem_btrfs 00:09:15.147 ************************************ 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:09:15.147 19:06:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:09:15.147 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:15.407 btrfs-progs v6.6.2 00:09:15.407 See https://btrfs.readthedocs.io for more information. 00:09:15.407 00:09:15.407 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:15.407 NOTE: several default settings have changed in version 5.15, please make sure 00:09:15.407 this does not affect your deployments: 00:09:15.407 - DUP for metadata (-m dup) 00:09:15.407 - enabled no-holes (-O no-holes) 00:09:15.407 - enabled free-space-tree (-R free-space-tree) 00:09:15.407 00:09:15.407 Label: (null) 00:09:15.407 UUID: c04bd8b0-dfc7-402c-8a93-241be4174e61 00:09:15.407 Node size: 16384 00:09:15.407 Sector size: 4096 00:09:15.407 Filesystem size: 510.00MiB 00:09:15.407 Block group profiles: 00:09:15.407 Data: single 8.00MiB 00:09:15.407 Metadata: DUP 32.00MiB 00:09:15.407 System: DUP 8.00MiB 00:09:15.407 SSD detected: yes 00:09:15.407 Zoned device: no 00:09:15.407 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:15.407 Runtime features: free-space-tree 00:09:15.407 Checksum: crc32c 00:09:15.407 Number of devices: 1 00:09:15.407 Devices: 00:09:15.407 ID SIZE PATH 00:09:15.407 1 510.00MiB /dev/nvme0n1p1 00:09:15.407 00:09:15.407 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:09:15.407 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:16.341 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:16.341 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:16.341 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:16.341 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:16.341 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:16.341 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:16.341 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2515900 00:09:16.341 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:16.342 00:09:16.342 real 0m0.995s 00:09:16.342 user 0m0.016s 00:09:16.342 sys 0m0.120s 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:16.342 ************************************ 00:09:16.342 END TEST filesystem_btrfs 00:09:16.342 ************************************ 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.342 ************************************ 00:09:16.342 START TEST filesystem_xfs 00:09:16.342 ************************************ 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:16.342 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:16.342 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:16.342 = sectsz=512 attr=2, projid32bit=1 00:09:16.342 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:16.342 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:09:16.342 data = bsize=4096 blocks=130560, imaxpct=25 00:09:16.342 = sunit=0 swidth=0 blks 00:09:16.342 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:16.342 log =internal log bsize=4096 blocks=16384, version=2 00:09:16.342 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:16.342 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:17.276 Discarding blocks...Done. 00:09:17.276 19:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:17.276 19:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2515900 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:19.807 00:09:19.807 real 0m3.493s 00:09:19.807 user 0m0.019s 00:09:19.807 sys 0m0.063s 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:19.807 ************************************ 00:09:19.807 END TEST filesystem_xfs 00:09:19.807 ************************************ 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:19.807 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.065 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:20.065 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.065 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2515900 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2515900 ']' 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2515900 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2515900 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2515900' 00:09:20.066 killing process with pid 2515900 00:09:20.066 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2515900 00:09:20.066 19:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2515900 00:09:20.324 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:20.324 00:09:20.324 real 0m10.777s 00:09:20.324 user 0m41.080s 00:09:20.324 sys 0m1.703s 00:09:20.324 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.324 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.324 ************************************ 00:09:20.324 END TEST nvmf_filesystem_no_in_capsule 00:09:20.324 ************************************ 00:09:20.324 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:20.324 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:20.324 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.324 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.324 ************************************ 00:09:20.324 START TEST nvmf_filesystem_in_capsule 00:09:20.324 ************************************ 00:09:20.324 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:09:20.324 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:20.324 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:20.325 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.325 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:20.325 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.325 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2517122 00:09:20.325 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:20.325 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2517122 00:09:20.325 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2517122 ']' 00:09:20.325 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.325 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.325 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
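The records just traced show common/autotest_common.sh arming waitforlisten for the new target (pid 2517122) with rpc_addr=/var/tmp/spdk.sock and max_retries=100 before the banner echoed on the next line. A minimal sketch of that polling loop, assuming rpc.py is reachable on PATH and that rpc_get_methods is the liveness probe (both assumptions; retry spacing simplified):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((i++ < max_retries)); do
            kill -0 "$pid" 2> /dev/null || return 1                         # target died during startup
            rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0  # RPC socket is answering
            sleep 0.1
        done
        return 1                                                            # retry budget exhausted
    }

The kill -0 guard is what lets the suite fail fast if nvmf_tgt crashes during startup instead of burning the full retry budget.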
00:09:20.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.325 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.325 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.325 [2024-07-24 19:06:26.321672] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:09:20.325 [2024-07-24 19:06:26.321774] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.584 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.584 [2024-07-24 19:06:26.386905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.584 [2024-07-24 19:06:26.503933] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.584 [2024-07-24 19:06:26.504004] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.584 [2024-07-24 19:06:26.504021] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.584 [2024-07-24 19:06:26.504034] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.584 [2024-07-24 19:06:26.504045] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.584 [2024-07-24 19:06:26.504151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.584 [2024-07-24 19:06:26.504227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.584 [2024-07-24 19:06:26.504279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.584 [2024-07-24 19:06:26.504282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
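The nvmf_create_transport call just traced kicks off the target-side provisioning for the in_capsule variant. Collected in one place, with every flag verbatim from the surrounding rpc_cmd traces and only the standalone rpc.py spelling assumed for readability:

    # TCP transport with 8192-byte I/O unit size; -c 4096 requests 4096 bytes
    # of in-capsule data, the knob this test variant exercises
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    # 512 MiB malloc bdev with 512-byte blocks (hence num_blocks 1048576 in
    # the bdev_get_bdevs dump below)
    rpc.py bdev_malloc_create 512 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The host side then runs nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 and polls lsblk until a block device with serial SPDKISFASTANDAWESOME appears, which becomes the nvme0n1 that the filesystem subtests partition and format.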
00:09:20.844 [2024-07-24 19:06:26.662063] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.844 Malloc1 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.844 [2024-07-24 19:06:26.831195] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:20.844 19:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:20.844 { 00:09:20.844 "name": "Malloc1", 00:09:20.844 "aliases": [ 00:09:20.844 "bd282b69-689a-4cc8-8346-1d1c1c908d6c" 00:09:20.844 ], 00:09:20.844 "product_name": "Malloc disk", 00:09:20.844 "block_size": 512, 00:09:20.844 "num_blocks": 1048576, 00:09:20.844 "uuid": "bd282b69-689a-4cc8-8346-1d1c1c908d6c", 00:09:20.844 "assigned_rate_limits": { 00:09:20.844 "rw_ios_per_sec": 0, 00:09:20.844 "rw_mbytes_per_sec": 0, 00:09:20.844 "r_mbytes_per_sec": 0, 00:09:20.844 "w_mbytes_per_sec": 0 00:09:20.844 }, 00:09:20.844 "claimed": true, 00:09:20.844 "claim_type": "exclusive_write", 00:09:20.844 "zoned": false, 00:09:20.844 "supported_io_types": { 00:09:20.844 "read": true, 00:09:20.844 "write": true, 00:09:20.844 "unmap": true, 00:09:20.844 "flush": true, 00:09:20.844 "reset": true, 00:09:20.844 "nvme_admin": false, 00:09:20.844 "nvme_io": false, 00:09:20.844 "nvme_io_md": false, 00:09:20.844 "write_zeroes": true, 00:09:20.844 "zcopy": true, 00:09:20.844 "get_zone_info": false, 00:09:20.844 "zone_management": false, 00:09:20.844 "zone_append": false, 00:09:20.844 "compare": false, 00:09:20.844 "compare_and_write": false, 00:09:20.844 "abort": true, 00:09:20.844 "seek_hole": false, 00:09:20.844 "seek_data": false, 00:09:20.844 "copy": true, 00:09:20.844 "nvme_iov_md": false 00:09:20.844 }, 00:09:20.844 "memory_domains": [ 00:09:20.844 { 00:09:20.844 "dma_device_id": "system", 00:09:20.844 "dma_device_type": 1 00:09:20.844 }, 00:09:20.844 { 00:09:20.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.844 "dma_device_type": 2 00:09:20.844 } 00:09:20.844 ], 00:09:20.844 "driver_specific": {} 00:09:20.844 } 00:09:20.844 ]' 00:09:20.844 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:21.102 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:21.102 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:21.102 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:21.102 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:21.102 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:21.102 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:21.102 19:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.669 19:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:21.669 19:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:21.669 19:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.669 19:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:21.669 19:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:23.567 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:24.132 19:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:25.066 19:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:25.998 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:25.999 ************************************ 00:09:25.999 START TEST filesystem_in_capsule_ext4 00:09:25.999 ************************************ 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:25.999 19:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:25.999 mke2fs 1.46.5 (30-Dec-2021) 00:09:25.999 Discarding device blocks: 0/522240 done 00:09:26.256 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:26.256 Filesystem UUID: 1a47538f-0bb9-47de-be31-9998115b7d61 00:09:26.256 Superblock backups stored on blocks: 00:09:26.256 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:09:26.256 00:09:26.256 Allocating group tables: 0/64 done 00:09:26.256 Writing inode tables: 0/64 done 00:09:26.514 Creating journal (8192 blocks): done 00:09:26.772 Writing superblocks and filesystem accounting information: 0/64 done 00:09:26.772 00:09:26.772 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:26.772 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2517122 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:27.031 00:09:27.031 real 0m1.023s 00:09:27.031 user 0m0.023s 00:09:27.031 sys 0m0.058s 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:27.031 ************************************ 00:09:27.031 END TEST filesystem_in_capsule_ext4 00:09:27.031 ************************************ 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:27.031 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.031 19:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.031 ************************************ 00:09:27.031 START TEST filesystem_in_capsule_btrfs 00:09:27.031 ************************************ 00:09:27.031 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:27.031 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:27.031 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:27.031 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:27.031 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:09:27.031 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:27.031 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:09:27.031 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:09:27.032 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:09:27.032 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:09:27.032 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:27.290 btrfs-progs v6.6.2 00:09:27.290 See https://btrfs.readthedocs.io for more information. 00:09:27.290 00:09:27.290 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
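While mkfs.btrfs prints its banner (continued below), the make_filesystem helper traced just above is worth condensing; retry bookkeeping (the local i=0 at @928 and the return 0 at @945) is omitted, the rest is as traced:

    # make_filesystem <fstype> <dev>: ext4 spells "force" as -F, btrfs/xfs as -f
    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi
        mkfs."$fstype" $force "$dev_name"
    }

    # Each filesystem_* subtest then exercises the result exactly as traced:
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                          # nvmf_tgt survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1       # controller still enumerated
    lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still enumerated

The closing kill -0 and lsblk checks are what each subtest actually asserts: the target stayed alive and the exported namespace remained visible after the format-and-write cycle.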
00:09:27.290 NOTE: several default settings have changed in version 5.15, please make sure 00:09:27.290 this does not affect your deployments: 00:09:27.290 - DUP for metadata (-m dup) 00:09:27.290 - enabled no-holes (-O no-holes) 00:09:27.290 - enabled free-space-tree (-R free-space-tree) 00:09:27.290 00:09:27.290 Label: (null) 00:09:27.290 UUID: a09b870c-2fdf-427c-b782-0697cfed6bad 00:09:27.290 Node size: 16384 00:09:27.290 Sector size: 4096 00:09:27.290 Filesystem size: 510.00MiB 00:09:27.290 Block group profiles: 00:09:27.290 Data: single 8.00MiB 00:09:27.290 Metadata: DUP 32.00MiB 00:09:27.290 System: DUP 8.00MiB 00:09:27.290 SSD detected: yes 00:09:27.290 Zoned device: no 00:09:27.290 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:27.290 Runtime features: free-space-tree 00:09:27.290 Checksum: crc32c 00:09:27.290 Number of devices: 1 00:09:27.290 Devices: 00:09:27.290 ID SIZE PATH 00:09:27.290 1 510.00MiB /dev/nvme0n1p1 00:09:27.290 00:09:27.290 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:09:27.290 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:27.548 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:27.548 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:27.548 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:27.548 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2517122 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:27.807 00:09:27.807 real 0m0.609s 00:09:27.807 user 0m0.022s 00:09:27.807 sys 0m0.113s 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.807 19:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:27.807 ************************************ 00:09:27.807 END TEST filesystem_in_capsule_btrfs 00:09:27.807 ************************************ 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.807 ************************************ 00:09:27.807 START TEST filesystem_in_capsule_xfs 00:09:27.807 ************************************ 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:27.807 19:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:27.807 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:27.807 = sectsz=512 attr=2, projid32bit=1 00:09:27.807 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:27.807 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:27.807 data = bsize=4096 blocks=130560, imaxpct=25 00:09:27.807 = sunit=0 swidth=0 blks 00:09:27.807 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:27.807 log =internal log bsize=4096 blocks=16384, version=2 00:09:27.807 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:27.807 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:09:28.739 Discarding blocks...Done. 00:09:28.739 19:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:28.739 19:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2517122 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:30.639 00:09:30.639 real 0m2.622s 00:09:30.639 user 0m0.018s 00:09:30.639 sys 0m0.056s 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:30.639 ************************************ 00:09:30.639 END TEST filesystem_in_capsule_xfs 00:09:30.639 ************************************ 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.639 19:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.639 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.898 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.898 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:30.898 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2517122 00:09:30.898 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2517122 ']' 00:09:30.898 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2517122 00:09:30.898 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:30.898 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.898 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2517122 00:09:30.898 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.898 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.898 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2517122' 00:09:30.898 killing process with pid 2517122 00:09:30.898 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2517122 00:09:30.898 19:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2517122 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:31.158 00:09:31.158 real 0m10.765s 00:09:31.158 user 0m41.077s 
00:09:31.158 sys 0m1.681s 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:31.158 ************************************ 00:09:31.158 END TEST nvmf_filesystem_in_capsule 00:09:31.158 ************************************ 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:31.158 rmmod nvme_tcp 00:09:31.158 rmmod nvme_fabrics 00:09:31.158 rmmod nvme_keyring 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.158 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.696 00:09:33.696 real 0m25.622s 00:09:33.696 user 1m22.876s 00:09:33.696 sys 0m4.743s 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.696 ************************************ 00:09:33.696 END TEST nvmf_filesystem 00:09:33.696 ************************************ 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:33.696 ************************************ 00:09:33.696 START TEST nvmf_target_discovery 00:09:33.696 ************************************ 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:33.696 * Looking for test storage... 00:09:33.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.696 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.697 19:06:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.697 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:35.075 19:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:35.075 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.075 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:35.075 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:35.076 Found net devices under 0000:08:00.0: cvl_0_0 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.076 19:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:35.076 Found net devices under 0000:08:00.1: cvl_0_1 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:35.076 19:06:40 
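The device scan above walks a vendor:device allow-list (0x8086:0x159b, which common.sh files under its e810 list and which binds the ice driver on both ports here) and then uses sysfs to map each PCI function to its net device. The same lookup can be reproduced standalone; a sketch, with the output format mine:

  # list net devices backed by 0x8086:0x159b functions, as nvmf/common.sh does above
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
      [[ $(cat "$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue
          echo "Found net device under ${pci##*/}: ${net##*/}"
      done
  done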
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:35.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:09:35.076 00:09:35.076 --- 10.0.0.2 ping statistics --- 00:09:35.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.076 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:09:35.076 00:09:35.076 --- 10.0.0.1 ping statistics --- 00:09:35.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.076 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:35.076 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:35.076 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:35.076 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:35.076 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:35.076 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.076 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2519722 00:09:35.076 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:35.076 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2519722 00:09:35.076 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2519722 ']' 00:09:35.076 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.077 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.077 19:06:41 
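nvmf_tcp_init above gives the target port its own network namespace so that target (10.0.0.2 on cvl_0_0) and initiator (10.0.0.1 on cvl_0_1) traverse a real link even on a single host, then proves connectivity both ways with ping. Condensed from the trace (requires root; interface names as in this run):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                 # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                              # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1          # target namespace -> root namespace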
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.077 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.077 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.077 [2024-07-24 19:06:41.075292] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:09:35.077 [2024-07-24 19:06:41.075383] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.337 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.337 [2024-07-24 19:06:41.142432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.337 [2024-07-24 19:06:41.259248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.337 [2024-07-24 19:06:41.259312] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.337 [2024-07-24 19:06:41.259329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.337 [2024-07-24 19:06:41.259341] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.337 [2024-07-24 19:06:41.259354] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.337 [2024-07-24 19:06:41.259464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.337 [2024-07-24 19:06:41.259557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.337 [2024-07-24 19:06:41.259596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.337 [2024-07-24 19:06:41.259600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.596 [2024-07-24 19:06:41.398858] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.596 Null1 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.596 [2024-07-24 19:06:41.439150] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.596 Null2 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.596 19:06:41 
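rpc_cmd above is a thin wrapper around scripts/rpc.py aimed at the /var/tmp/spdk.sock of the nvmf_tgt just started inside the namespace (UNIX domain sockets live on the filesystem, so the client needs no ip netns exec). Replayed by hand, the transport setup and the first loop iteration look like this; the remaining iterations below only change the suffix 1 to 2-4:

  RPC=./scripts/rpc.py                                  # run from the SPDK repository root
  $RPC nvmf_create_transport -t tcp -o -u 8192          # flags exactly as issued above
  $RPC bdev_null_create Null1 102400 512                # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420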
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.596 Null3 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.596 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.597 Null4 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.597 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:09:35.855 00:09:35.855 Discovery Log Number of Records 6, Generation counter 6 00:09:35.855 =====Discovery Log Entry 0====== 00:09:35.855 trtype: tcp 00:09:35.855 adrfam: ipv4 00:09:35.855 subtype: current discovery subsystem 00:09:35.855 treq: not required 00:09:35.855 portid: 0 00:09:35.855 trsvcid: 4420 00:09:35.855 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:35.855 traddr: 10.0.0.2 00:09:35.855 eflags: explicit discovery connections, duplicate discovery information 00:09:35.855 sectype: none 00:09:35.855 =====Discovery Log Entry 1====== 00:09:35.855 trtype: tcp 00:09:35.855 adrfam: ipv4 00:09:35.855 subtype: nvme subsystem 00:09:35.856 treq: not required 00:09:35.856 portid: 0 00:09:35.856 trsvcid: 4420 00:09:35.856 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:35.856 traddr: 10.0.0.2 00:09:35.856 eflags: none 00:09:35.856 sectype: none 00:09:35.856 =====Discovery Log Entry 2====== 00:09:35.856 trtype: tcp 00:09:35.856 adrfam: ipv4 00:09:35.856 subtype: nvme subsystem 00:09:35.856 treq: not required 00:09:35.856 portid: 0 00:09:35.856 trsvcid: 4420 00:09:35.856 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:35.856 traddr: 10.0.0.2 00:09:35.856 eflags: none 00:09:35.856 sectype: none 00:09:35.856 =====Discovery Log Entry 3====== 00:09:35.856 trtype: tcp 00:09:35.856 adrfam: ipv4 00:09:35.856 subtype: nvme subsystem 00:09:35.856 treq: not required 00:09:35.856 portid: 0 00:09:35.856 trsvcid: 4420 00:09:35.856 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:35.856 traddr: 10.0.0.2 00:09:35.856 eflags: none 00:09:35.856 sectype: none 00:09:35.856 =====Discovery Log Entry 4====== 00:09:35.856 trtype: tcp 00:09:35.856 adrfam: ipv4 00:09:35.856 subtype: nvme subsystem 00:09:35.856 treq: not required 00:09:35.856 portid: 0 00:09:35.856 trsvcid: 4420 00:09:35.856 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:35.856 traddr: 10.0.0.2 00:09:35.856 eflags: none 00:09:35.856 sectype: none 00:09:35.856 =====Discovery Log Entry 5====== 00:09:35.856 trtype: tcp 00:09:35.856 adrfam: ipv4 00:09:35.856 subtype: discovery subsystem referral 00:09:35.856 treq: not required 00:09:35.856 portid: 0 00:09:35.856 trsvcid: 4430 00:09:35.856 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:35.856 traddr: 10.0.0.2 00:09:35.856 eflags: none 00:09:35.856 sectype: none 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:35.856 Perform nvmf subsystem discovery via RPC 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.856 [ 00:09:35.856 { 00:09:35.856 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:35.856 "subtype": "Discovery", 00:09:35.856 "listen_addresses": [ 00:09:35.856 { 00:09:35.856 "trtype": "TCP", 00:09:35.856 "adrfam": "IPv4", 00:09:35.856 "traddr": "10.0.0.2", 00:09:35.856 "trsvcid": "4420" 00:09:35.856 } 00:09:35.856 ], 00:09:35.856 "allow_any_host": true, 00:09:35.856 "hosts": [] 00:09:35.856 }, 00:09:35.856 { 00:09:35.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:35.856 "subtype": "NVMe", 00:09:35.856 "listen_addresses": [ 00:09:35.856 { 00:09:35.856 "trtype": "TCP", 00:09:35.856 "adrfam": "IPv4", 00:09:35.856 
"traddr": "10.0.0.2", 00:09:35.856 "trsvcid": "4420" 00:09:35.856 } 00:09:35.856 ], 00:09:35.856 "allow_any_host": true, 00:09:35.856 "hosts": [], 00:09:35.856 "serial_number": "SPDK00000000000001", 00:09:35.856 "model_number": "SPDK bdev Controller", 00:09:35.856 "max_namespaces": 32, 00:09:35.856 "min_cntlid": 1, 00:09:35.856 "max_cntlid": 65519, 00:09:35.856 "namespaces": [ 00:09:35.856 { 00:09:35.856 "nsid": 1, 00:09:35.856 "bdev_name": "Null1", 00:09:35.856 "name": "Null1", 00:09:35.856 "nguid": "168C2E6E14964E0EA0E6F98857C90D14", 00:09:35.856 "uuid": "168c2e6e-1496-4e0e-a0e6-f98857c90d14" 00:09:35.856 } 00:09:35.856 ] 00:09:35.856 }, 00:09:35.856 { 00:09:35.856 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:35.856 "subtype": "NVMe", 00:09:35.856 "listen_addresses": [ 00:09:35.856 { 00:09:35.856 "trtype": "TCP", 00:09:35.856 "adrfam": "IPv4", 00:09:35.856 "traddr": "10.0.0.2", 00:09:35.856 "trsvcid": "4420" 00:09:35.856 } 00:09:35.856 ], 00:09:35.856 "allow_any_host": true, 00:09:35.856 "hosts": [], 00:09:35.856 "serial_number": "SPDK00000000000002", 00:09:35.856 "model_number": "SPDK bdev Controller", 00:09:35.856 "max_namespaces": 32, 00:09:35.856 "min_cntlid": 1, 00:09:35.856 "max_cntlid": 65519, 00:09:35.856 "namespaces": [ 00:09:35.856 { 00:09:35.856 "nsid": 1, 00:09:35.856 "bdev_name": "Null2", 00:09:35.856 "name": "Null2", 00:09:35.856 "nguid": "896F3867C830465AA03F5671876D864B", 00:09:35.856 "uuid": "896f3867-c830-465a-a03f-5671876d864b" 00:09:35.856 } 00:09:35.856 ] 00:09:35.856 }, 00:09:35.856 { 00:09:35.856 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:35.856 "subtype": "NVMe", 00:09:35.856 "listen_addresses": [ 00:09:35.856 { 00:09:35.856 "trtype": "TCP", 00:09:35.856 "adrfam": "IPv4", 00:09:35.856 "traddr": "10.0.0.2", 00:09:35.856 "trsvcid": "4420" 00:09:35.856 } 00:09:35.856 ], 00:09:35.856 "allow_any_host": true, 00:09:35.856 "hosts": [], 00:09:35.856 "serial_number": "SPDK00000000000003", 00:09:35.856 "model_number": "SPDK bdev Controller", 00:09:35.856 "max_namespaces": 32, 00:09:35.856 "min_cntlid": 1, 00:09:35.856 "max_cntlid": 65519, 00:09:35.856 "namespaces": [ 00:09:35.856 { 00:09:35.856 "nsid": 1, 00:09:35.856 "bdev_name": "Null3", 00:09:35.856 "name": "Null3", 00:09:35.856 "nguid": "C53A1C309F804EF397F8FACF52B5F4E4", 00:09:35.856 "uuid": "c53a1c30-9f80-4ef3-97f8-facf52b5f4e4" 00:09:35.856 } 00:09:35.856 ] 00:09:35.856 }, 00:09:35.856 { 00:09:35.856 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:35.856 "subtype": "NVMe", 00:09:35.856 "listen_addresses": [ 00:09:35.856 { 00:09:35.856 "trtype": "TCP", 00:09:35.856 "adrfam": "IPv4", 00:09:35.856 "traddr": "10.0.0.2", 00:09:35.856 "trsvcid": "4420" 00:09:35.856 } 00:09:35.856 ], 00:09:35.856 "allow_any_host": true, 00:09:35.856 "hosts": [], 00:09:35.856 "serial_number": "SPDK00000000000004", 00:09:35.856 "model_number": "SPDK bdev Controller", 00:09:35.856 "max_namespaces": 32, 00:09:35.856 "min_cntlid": 1, 00:09:35.856 "max_cntlid": 65519, 00:09:35.856 "namespaces": [ 00:09:35.856 { 00:09:35.856 "nsid": 1, 00:09:35.856 "bdev_name": "Null4", 00:09:35.856 "name": "Null4", 00:09:35.856 "nguid": "BE7394D54D59490E97579ACAB8F3FE6D", 00:09:35.856 "uuid": "be7394d5-4d59-490e-9757-9acab8f3fe6d" 00:09:35.856 } 00:09:35.856 ] 00:09:35.856 } 00:09:35.856 ] 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:35.856 19:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:35.856 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:35.857 19:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:35.857 rmmod nvme_tcp 00:09:35.857 rmmod nvme_fabrics 00:09:35.857 rmmod nvme_keyring 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:35.857 19:06:41 
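Teardown above mirrors setup in reverse: each subsystem is deleted before the null bdev backing it, then the referral is withdrawn, and an empty bdev_get_bdevs result (the empty check_bdevs= assignment above) confirms nothing leaked. The equivalent by hand:

  for i in 1 2 3 4; do
      ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      ./scripts/rpc.py bdev_null_delete "Null$i"
  done
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  leftover=$(./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')
  [[ -z $leftover ]] && echo clean || echo "leaked bdevs: $leftover"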
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2519722 ']' 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2519722 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2519722 ']' 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2519722 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:35.857 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2519722 00:09:36.116 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:36.116 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:36.116 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2519722' 00:09:36.116 killing process with pid 2519722 00:09:36.116 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2519722 00:09:36.116 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2519722 00:09:36.116 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.116 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.116 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:36.116 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.116 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.116 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.116 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.116 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.652 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:38.652 00:09:38.652 real 0m4.952s 00:09:38.652 user 0m3.981s 00:09:38.652 sys 0m1.534s 00:09:38.652 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.652 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.652 ************************************ 00:09:38.652 END TEST nvmf_target_discovery 00:09:38.652 ************************************ 00:09:38.652 19:06:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:38.652 19:06:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:38.652 19:06:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.652 19:06:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:38.652 ************************************ 00:09:38.652 START TEST nvmf_referrals 00:09:38.652 ************************************ 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:38.653 * Looking for test storage... 00:09:38.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.653 19:06:44 
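run_test, which framed the END TEST banner above and the START TEST banner for nvmf_referrals, is autotest_common.sh's harness: banner, timed execution, banner. A stripped-down equivalent, with the real helper's internals assumed rather than copied:

  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                      # produces the real/user/sys lines seen above
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  # run_test_sketch nvmf_referrals ./test/nvmf/target/referrals.sh --transport=tcp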
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.653 19:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:38.653 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.042 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.042 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:40.042 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.042 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:40.043 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.043 19:06:46 
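The device walk above buckets supported NICs by PCI vendor:device ID (the e810/x722/mlx arrays), and this box matches on 0x8086:0x159b, the two Intel E810 ports. The sysfs lookup that then turns each PCI function into a kernel netdev can be sketched standalone like this (IDs taken from the trace; the /sys layout is standard):

    # Find E810 functions and the netdevs behind them, mirroring what
    # gather_supported_nvmf_pci_devs records in the trace.
    shopt -s nullglob
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        pci_net_devs=("$pci"/net/*)              # e.g. .../0000:08:00.0/net/cvl_0_0
        echo "Found ${pci##*/} -> ${pci_net_devs[*]##*/}"
    done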
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:40.043 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:40.043 Found net devices under 0000:08:00.0: cvl_0_0 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 
00:09:40.043 Found net devices under 0000:08:00.1: cvl_0_1 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.043 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:40.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:40.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:09:40.331 00:09:40.331 --- 10.0.0.2 ping statistics --- 00:09:40.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.331 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:09:40.331 00:09:40.331 --- 10.0.0.1 ping statistics --- 00:09:40.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.331 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2521342 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2521342 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2521342 ']' 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
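Both pings succeeding completes nvmf_tcp_init: the two back-to-back E810 ports are split across network namespaces, with the target port moved into a fresh namespace and each side addressed on 10.0.0.0/24. Condensed from the trace (the cvl_0_* names are specific to this machine):

    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"       # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP data port
    ping -c 1 10.0.0.2                          # root ns -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1      # and back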
00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.331 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.331 [2024-07-24 19:06:46.213908] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:09:40.331 [2024-07-24 19:06:46.214002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.331 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.331 [2024-07-24 19:06:46.295028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.603 [2024-07-24 19:06:46.450536] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.603 [2024-07-24 19:06:46.450611] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.603 [2024-07-24 19:06:46.450642] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.603 [2024-07-24 19:06:46.450668] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.603 [2024-07-24 19:06:46.450690] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.603 [2024-07-24 19:06:46.450787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.603 [2024-07-24 19:06:46.450845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.604 [2024-07-24 19:06:46.450902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.604 [2024-07-24 19:06:46.450912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.604 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.604 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:09:40.604 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:40.604 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:40.604 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.861 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.862 [2024-07-24 19:06:46.623799] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.862 19:06:46 
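At this point the target application is up: nvmf_tgt runs under ip netns exec inside the namespace, the harness waits for its RPC socket, and two RPCs create the TCP transport (with the -t tcp -o -u 8192 options recorded above) and a discovery listener on 10.0.0.2:8009. Approximately, with rpc_cmd and waitforlisten replaced by their plain equivalents:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten
    "$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$spdk/scripts/rpc.py" nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery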
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.862 [2024-07-24 19:06:46.636024] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:40.862 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.119 19:06:46 
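That completes a symmetric first pass: three plain referrals registered, read back both over RPC and over the wire, then removed again, with the count expected to drop back to zero. The same flow as a plain script against the RPCs traced above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    "$rpc" nvmf_discovery_get_referrals | jq length                        # expect 3
    "$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$rpc" nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    "$rpc" nvmf_discovery_get_referrals | jq length                        # expect 0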
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:41.119 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:41.120 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:41.120 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:41.120 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:41.120 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:41.120 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:41.120 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.120 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.120 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.120 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:41.120 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.120 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.120 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:41.377 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.635 19:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:41.635 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.636 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:41.636 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.636 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:41.636 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:41.636 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:41.636 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:41.636 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:41.636 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:41.636 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:41.636 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:41.893 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:41.893 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:41.893 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:41.893 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:41.893 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:41.893 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:41.893 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:41.893 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:41.893 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:41.893 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
00:09:41.893 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:41.893 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:41.893 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:42.150 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:42.150 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:42.150 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.150 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.150 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.150 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:42.150 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.150 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:42.150 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.150 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.150 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:42.150 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
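The second pass re-adds 127.0.0.2 twice with explicit subsystem NQNs (-n discovery and -n nqn.2016-06.io.spdk:cnode1) and verifies what a host actually sees: nvme discover against the 8009 listener returns JSON records whose subtype field distinguishes the current discovery subsystem, referred discovery subsystems, and referred NVMe subsystems. The jq filters used above reduce to:

    # Discovery query as issued by the test; hostnqn/hostid are whatever
    # `nvme gen-hostnqn` produced earlier in the run.
    disc() {
        nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -a 10.0.0.2 -s 8009 -o json
    }
    # Referral addresses (everything but the discovery subsystem being queried):
    disc | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # Subsystem NQN of the referred NVMe subsystem entry:
    disc | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
    # Subsystem NQN of the referred discovery entry:
    disc | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'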
00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:42.151 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:42.409 rmmod nvme_tcp 00:09:42.409 rmmod nvme_fabrics 00:09:42.409 rmmod nvme_keyring 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2521342 ']' 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2521342 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2521342 ']' 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2521342 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2521342 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2521342' 00:09:42.409 killing process with pid 2521342 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2521342 00:09:42.409 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2521342 00:09:42.667 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:42.667 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:42.667 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:42.667 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:42.667 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:42.667 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.667 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.667 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.574 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:44.574 00:09:44.575 real 0m6.311s 00:09:44.575 user 0m9.532s 00:09:44.575 sys 0m1.934s 00:09:44.575 19:06:50 
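Teardown mirrors setup: sync, unload the host-side NVMe modules (the rmmod lines above are modprobe's verbose output), kill the target process, and drop the namespace and addresses. In outline, with the netns removal written out explicitly rather than hidden behind _remove_spdk_ns:

    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # 2521342 in this run
    ip netns delete cvl_0_0_ns_spdk       # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1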
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.575 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:44.575 ************************************ 00:09:44.575 END TEST nvmf_referrals 00:09:44.575 ************************************ 00:09:44.575 19:06:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:44.575 19:06:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:44.575 19:06:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.575 19:06:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:44.575 ************************************ 00:09:44.575 START TEST nvmf_connect_disconnect 00:09:44.575 ************************************ 00:09:44.575 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:44.833 * Looking for test storage... 00:09:44.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.834 19:06:50 
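The referrals suite closes at about 6.3 s wall time and the harness immediately launches the next one. run_test essentially wraps the script with the timing and the START/END banners; the suite body is an ordinary script and can be rerun on its own:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    time test/nvmf/target/connect_disconnect.sh --transport=tcp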
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:44.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:09:46.740 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.740 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:46.740 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:46.740 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:46.740 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:46.740 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:46.740 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:46.740 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:46.740 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:46.740 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:46.740 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:46.740 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:46.741 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:46.741 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.741 19:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:46.741 Found net devices under 0000:08:00.0: cvl_0_0 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:46.741 Found net devices under 0000:08:00.1: cvl_0_1 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:46.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:09:46.741 00:09:46.741 --- 10.0.0.2 ping statistics --- 00:09:46.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.741 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:46.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:09:46.741 00:09:46.741 --- 10.0.0.1 ping statistics --- 00:09:46.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.741 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:46.741 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2523054 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2523054 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2523054 ']' 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.742 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:46.742 [2024-07-24 19:06:52.485299] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:09:46.742 [2024-07-24 19:06:52.485396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.742 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.742 [2024-07-24 19:06:52.551893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.742 [2024-07-24 19:06:52.669386] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.742 [2024-07-24 19:06:52.669450] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.742 [2024-07-24 19:06:52.669476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.742 [2024-07-24 19:06:52.669506] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.742 [2024-07-24 19:06:52.669525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.742 [2024-07-24 19:06:52.669631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.742 [2024-07-24 19:06:52.669721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.742 [2024-07-24 19:06:52.669776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.742 [2024-07-24 19:06:52.669785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:47.001 [2024-07-24 19:06:52.819828] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.001 19:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:47.001 [2024-07-24 19:06:52.874342] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:47.001 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:49.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.633 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:59.633 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:59.633 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:59.633 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:59.633 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:59.633 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:59.633 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:59.633 19:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:59.633 rmmod nvme_tcp 00:09:59.633 rmmod nvme_fabrics 00:09:59.633 rmmod nvme_keyring 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2523054 ']' 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2523054 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2523054 ']' 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2523054 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2523054 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2523054' 00:09:59.891 killing process with pid 2523054 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2523054 00:09:59.891 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2523054 00:10:00.151 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:00.151 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:00.151 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:00.151 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:00.151 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:00.151 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.151 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.151 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.057 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:02.057 00:10:02.057 real 0m17.402s 00:10:02.057 user 0m52.389s 00:10:02.057 sys 0m2.865s 00:10:02.057 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.057 19:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:02.057 ************************************ 00:10:02.057 END TEST nvmf_connect_disconnect 00:10:02.057 ************************************ 00:10:02.057 19:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:02.057 19:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:02.057 19:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:02.057 19:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:02.057 ************************************ 00:10:02.057 START TEST nvmf_multitarget 00:10:02.057 ************************************ 00:10:02.057 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:02.315 * Looking for test storage... 00:10:02.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.315 19:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.315 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:10:02.316 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:03.695 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:03.696 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.696 19:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:03.696 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:03.696 Found net devices under 0000:08:00.0: cvl_0_0 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:03.696 Found net devices under 0000:08:00.1: cvl_0_1 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.696 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:03.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:03.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:10:03.954 00:10:03.954 --- 10.0.0.2 ping statistics --- 00:10:03.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.954 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:03.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:10:03.954 00:10:03.954 --- 10.0.0.1 ping statistics --- 00:10:03.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.954 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2525850 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2525850 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2525850 ']' 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
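[Editor's note: at this point both tests launch the target the same way: nvmf_tgt is started inside the namespace with the flags shown in the trace, and the harness (nvmfappstart/waitforlisten) polls /var/tmp/spdk.sock until the app answers RPCs. A hedged equivalent follows — the polling loop is illustrative rather than the harness code; rpc.py and the rpc_get_methods method are stock SPDK tooling:]

```bash
NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Launch the target in the namespace with the flags seen in the trace.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Wait until the RPC socket accepts requests (waitforlisten's job).
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.1
done
echo "nvmf_tgt listening as pid $nvmfpid"
```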
00:10:03.954 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.955 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:03.955 [2024-07-24 19:07:09.882317] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:10:03.955 [2024-07-24 19:07:09.882411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.955 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.955 [2024-07-24 19:07:09.948634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.212 [2024-07-24 19:07:10.067012] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.213 [2024-07-24 19:07:10.067070] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.213 [2024-07-24 19:07:10.067086] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.213 [2024-07-24 19:07:10.067099] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.213 [2024-07-24 19:07:10.067110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:04.213 [2024-07-24 19:07:10.067195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.213 [2024-07-24 19:07:10.067249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.213 [2024-07-24 19:07:10.067300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.213 [2024-07-24 19:07:10.067303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.213 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.213 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:10:04.213 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:04.213 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.213 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:04.213 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.213 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:04.213 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:04.213 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:04.470 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:04.470 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:04.470 "nvmf_tgt_1" 00:10:04.470 19:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:04.728 "nvmf_tgt_2" 00:10:04.728 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:04.728 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:04.728 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:04.728 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:04.986 true 00:10:04.986 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:04.986 true 00:10:04.986 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:04.986 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:05.245 rmmod nvme_tcp 00:10:05.245 rmmod nvme_fabrics 00:10:05.245 rmmod nvme_keyring 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2525850 ']' 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2525850 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2525850 ']' 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2525850 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
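[Editor's note: the nvmf_multitarget pass/fail logic traced above is compact enough to restate. Every RPC below appears verbatim in the trace; the bracketed assertions mirror the '[' N '!=' N ']' checks — one default target, plus two created and then deleted:]

```bash
# Condensed from the multitarget trace above; the rpc helper path is the
# one this job uses.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]  # only the default target exists
$RPC nvmf_create_target -n nvmf_tgt_1 -s 32       # prints "nvmf_tgt_1"
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32       # prints "nvmf_tgt_2"
[ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]  # default + the two new targets
$RPC nvmf_delete_target -n nvmf_tgt_1             # prints "true"
$RPC nvmf_delete_target -n nvmf_tgt_2             # prints "true"
[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]  # back to the default only
```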
00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2525850 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2525850' 00:10:05.245 killing process with pid 2525850 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2525850 00:10:05.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2525850 00:10:05.505 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:05.505 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:05.505 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:05.505 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:05.505 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:05.505 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.505 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.505 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:08.047 00:10:08.047 real 0m5.432s 00:10:08.047 user 0m6.747s 00:10:08.047 sys 0m1.633s 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:08.047 ************************************ 00:10:08.047 END TEST nvmf_multitarget 00:10:08.047 ************************************ 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:08.047 ************************************ 00:10:08.047 START TEST nvmf_rpc 00:10:08.047 ************************************ 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:08.047 * Looking for test storage... 
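[Editor's note: both tests end with the same nvmftestfini teardown traced above: unload the kernel NVMe/TCP stack, kill the target, and drop the namespace. A sketch under stated assumptions — remove_spdk_ns's body is not shown in this trace, so ip netns delete stands in for it here, and the {1..20} modprobe retry loop is elided:]

```bash
# nvmfpid is the target pid captured at launch (see the startup sketch above).
modprobe -v -r nvme-tcp            # trace shows nvme_tcp/nvme_fabrics/nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null   # killprocess $nvmfpid
ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1           # final flush seen at the end of each test
```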
00:10:08.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:08.047 19:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:10:08.047 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.425 19:07:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:09.425 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:09.425 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.425 
19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:09.425 Found net devices under 0000:08:00.0: cvl_0_0 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:09.425 Found net devices under 0000:08:00.1: cvl_0_1 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.425 19:07:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:09.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:09.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:10:09.425 00:10:09.425 --- 10.0.0.2 ping statistics --- 00:10:09.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.425 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:09.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:09.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:10:09.425 00:10:09.425 --- 10.0.0.1 ping statistics --- 00:10:09.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.425 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:10:09.425 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2527482 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:09.426 19:07:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2527482 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2527482 ']' 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.426 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.683 [2024-07-24 19:07:15.472151] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:10:09.683 [2024-07-24 19:07:15.472245] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.683 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.683 [2024-07-24 19:07:15.542448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.683 [2024-07-24 19:07:15.663141] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.683 [2024-07-24 19:07:15.663207] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.683 [2024-07-24 19:07:15.663223] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.683 [2024-07-24 19:07:15.663237] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.683 [2024-07-24 19:07:15.663248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
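[Editor's note] For orientation, the namespace plumbing and target launch that produced the messages above condense to the commands below, all taken from the trace (cvl_0_0/cvl_0_1 are this machine's two E810 ports; the harness's error handling and the waitforlisten polling of /var/tmp/spdk.sock are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # -m 0xF starts four reactors (cores 0-3), -e 0xFFFF enables all tracepoint
  # groups, -i 0 selects shared-memory id 0:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &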
00:10:09.683 [2024-07-24 19:07:15.663306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.683 [2024-07-24 19:07:15.663365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.683 [2024-07-24 19:07:15.663415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.683 [2024-07-24 19:07:15.663419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:09.982 "tick_rate": 2700000000, 00:10:09.982 "poll_groups": [ 00:10:09.982 { 00:10:09.982 "name": "nvmf_tgt_poll_group_000", 00:10:09.982 "admin_qpairs": 0, 00:10:09.982 "io_qpairs": 0, 00:10:09.982 "current_admin_qpairs": 0, 00:10:09.982 "current_io_qpairs": 0, 00:10:09.982 "pending_bdev_io": 0, 00:10:09.982 "completed_nvme_io": 0, 00:10:09.982 "transports": [] 00:10:09.982 }, 00:10:09.982 { 00:10:09.982 "name": "nvmf_tgt_poll_group_001", 00:10:09.982 "admin_qpairs": 0, 00:10:09.982 "io_qpairs": 0, 00:10:09.982 "current_admin_qpairs": 0, 00:10:09.982 "current_io_qpairs": 0, 00:10:09.982 "pending_bdev_io": 0, 00:10:09.982 "completed_nvme_io": 0, 00:10:09.982 "transports": [] 00:10:09.982 }, 00:10:09.982 { 00:10:09.982 "name": "nvmf_tgt_poll_group_002", 00:10:09.982 "admin_qpairs": 0, 00:10:09.982 "io_qpairs": 0, 00:10:09.982 "current_admin_qpairs": 0, 00:10:09.982 "current_io_qpairs": 0, 00:10:09.982 "pending_bdev_io": 0, 00:10:09.982 "completed_nvme_io": 0, 00:10:09.982 "transports": [] 00:10:09.982 }, 00:10:09.982 { 00:10:09.982 "name": "nvmf_tgt_poll_group_003", 00:10:09.982 "admin_qpairs": 0, 00:10:09.982 "io_qpairs": 0, 00:10:09.982 "current_admin_qpairs": 0, 00:10:09.982 "current_io_qpairs": 0, 00:10:09.982 "pending_bdev_io": 0, 00:10:09.982 "completed_nvme_io": 0, 00:10:09.982 "transports": [] 00:10:09.982 } 00:10:09.982 ] 00:10:09.982 }' 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
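[Editor's note] The jcount and jsum helpers used around these stats are small jq/awk reductions; against the nvmf_get_stats document above they behave roughly like this (a sketch of the effect, not the helpers' literal text):

  # one poll group per reactor core -> 4
  rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l
  # sum a counter across all poll groups -> 0 before any connects
  rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'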
00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.982 [2024-07-24 19:07:15.908986] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:09.982 "tick_rate": 2700000000, 00:10:09.982 "poll_groups": [ 00:10:09.982 { 00:10:09.982 "name": "nvmf_tgt_poll_group_000", 00:10:09.982 "admin_qpairs": 0, 00:10:09.982 "io_qpairs": 0, 00:10:09.982 "current_admin_qpairs": 0, 00:10:09.982 "current_io_qpairs": 0, 00:10:09.982 "pending_bdev_io": 0, 00:10:09.982 "completed_nvme_io": 0, 00:10:09.982 "transports": [ 00:10:09.982 { 00:10:09.982 "trtype": "TCP" 00:10:09.982 } 00:10:09.982 ] 00:10:09.982 }, 00:10:09.982 { 00:10:09.982 "name": "nvmf_tgt_poll_group_001", 00:10:09.982 "admin_qpairs": 0, 00:10:09.982 "io_qpairs": 0, 00:10:09.982 "current_admin_qpairs": 0, 00:10:09.982 "current_io_qpairs": 0, 00:10:09.982 "pending_bdev_io": 0, 00:10:09.982 "completed_nvme_io": 0, 00:10:09.982 "transports": [ 00:10:09.982 { 00:10:09.982 "trtype": "TCP" 00:10:09.982 } 00:10:09.982 ] 00:10:09.982 }, 00:10:09.982 { 00:10:09.982 "name": "nvmf_tgt_poll_group_002", 00:10:09.982 "admin_qpairs": 0, 00:10:09.982 "io_qpairs": 0, 00:10:09.982 "current_admin_qpairs": 0, 00:10:09.982 "current_io_qpairs": 0, 00:10:09.982 "pending_bdev_io": 0, 00:10:09.982 "completed_nvme_io": 0, 00:10:09.982 "transports": [ 00:10:09.982 { 00:10:09.982 "trtype": "TCP" 00:10:09.982 } 00:10:09.982 ] 00:10:09.982 }, 00:10:09.982 { 00:10:09.982 "name": "nvmf_tgt_poll_group_003", 00:10:09.982 "admin_qpairs": 0, 00:10:09.982 "io_qpairs": 0, 00:10:09.982 "current_admin_qpairs": 0, 00:10:09.982 "current_io_qpairs": 0, 00:10:09.982 "pending_bdev_io": 0, 00:10:09.982 "completed_nvme_io": 0, 00:10:09.982 "transports": [ 00:10:09.982 { 00:10:09.982 "trtype": "TCP" 00:10:09.982 } 00:10:09.982 ] 00:10:09.982 } 00:10:09.982 ] 00:10:09.982 }' 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:09.982 19:07:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:09.982 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.253 Malloc1 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.253 [2024-07-24 19:07:16.071877] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:10:10.253 [2024-07-24 19:07:16.094355] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:10:10.253 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:10.253 could not add new controller: failed to write to nvme-fabrics device 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.253 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:10.818 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:10.818 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:10.818 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:10.818 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:10.818 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:12.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:12.714 [2024-07-24 19:07:18.671899] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:10:12.714 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:12.714 could not add new controller: failed to write to nvme-fabrics device 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:12.714 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.715 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.715 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.715 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:13.280 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:13.280 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:13.280 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:13.280 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:13.280 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
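[Editor's note] The two rejected connects above are the point of this passage: a fabrics connect only succeeds once the host NQN is on the subsystem's allow list or any-host access is enabled; otherwise nvmf_qpair_access_allowed fails it with 'does not allow host' and nvme-cli reports the I/O error seen here. Stripped of the harness plumbing (the long hostnqn/hostid shortened to <hostnqn>/<hostid>):

  nvme connect --hostnqn=<hostnqn> --hostid=<hostid> -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # rejected
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 <hostnqn>        # allow this one host, or:
  rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1         # allow any host
  nvme connect --hostnqn=<hostnqn> --hostid=<hostid> -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # admitted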
00:10:15.177 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:15.177 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:15.177 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.177 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:15.177 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.177 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:15.177 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 [2024-07-24 19:07:21.300170] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.435 
19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.435 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:16.000 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:16.000 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:16.000 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:16.000 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:16.000 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:17.895 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:17.895 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:17.895 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:17.895 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:17.895 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:17.895 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:17.895 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:17.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.895 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:17.895 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:17.895 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:17.895 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.895 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:17.895 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.152 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.152 [2024-07-24 19:07:23.944754] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.153 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.153 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:18.153 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.153 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.153 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.153 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:18.153 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.153 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.153 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.153 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.718 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.718 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:10:18.718 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.718 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:18.718 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.616 [2024-07-24 19:07:26.601852] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.616 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:21.182 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:21.182 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:21.182 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:21.182 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:21.182 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:23.080 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:23.080 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:23.080 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:23.080 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:23.080 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:23.080 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:23.080 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.338 19:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.338 [2024-07-24 19:07:29.212901] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.338 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.904 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:23.904 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:23.904 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.904 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:23.904 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:25.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.805 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.063 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.063 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.063 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.063 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.063 [2024-07-24 19:07:31.828178] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.063 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.063 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:26.063 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.063 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.063 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.063 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:26.063 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.063 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.063 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.063 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:26.322 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:26.322 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:26.322 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.322 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:26.322 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:28.848 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:28.848 19:07:34 
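
Each iteration of the target/rpc.sh@81 loop above runs the same subsystem lifecycle. Spelled out against scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it), with the NQN, serial, listener address, and namespace ID taken from the trace; the $rpc shorthand is an assumption of this sketch:

rpc="scripts/rpc.py"
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME    # serial the host greps for
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5               # bdev Malloc1 as NSID 5
$rpc nvmf_subsystem_allow_any_host $nqn

# host side; the trace also passes --hostnqn/--hostid from nvmf/common.sh
nvme connect -t tcp -n $nqn -a 10.0.0.2 -s 4420
waitforserial SPDKISFASTANDAWESOME
nvme disconnect -n $nqn

$rpc nvmf_subsystem_remove_ns $nqn 5
$rpc nvmf_delete_subsystem $nqn
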
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:28.848 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.848 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:28.848 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.848 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:28.848 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.848 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.848 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:28.848 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:28.848 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.848 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:28.848 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.848 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 [2024-07-24 19:07:34.400815] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 [2024-07-24 19:07:34.448868] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 [2024-07-24 19:07:34.497046] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.849 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 [2024-07-24 19:07:34.545211] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 [2024-07-24 19:07:34.593378] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.850 19:07:34 
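
The second loop (target/rpc.sh@99, seq 1 5) repeats the create/teardown churn without ever connecting a host, exercising namespace hot-add and hot-remove on an idle subsystem. A sketch of one round, reusing $rpc and $nqn from the sketch above; note that nvmf_subsystem_add_ns is called without -n here, so the namespace takes the first free NSID, which is why the trace removes NSID 1:

for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns $nqn Malloc1        # NSID auto-assigned -> 1
    $rpc nvmf_subsystem_allow_any_host $nqn
    $rpc nvmf_subsystem_remove_ns $nqn 1
    $rpc nvmf_delete_subsystem $nqn
done
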
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:10:28.850 "tick_rate": 2700000000,
00:10:28.850 "poll_groups": [
00:10:28.850 {
00:10:28.850 "name": "nvmf_tgt_poll_group_000",
00:10:28.850 "admin_qpairs": 2,
00:10:28.850 "io_qpairs": 56,
00:10:28.850 "current_admin_qpairs": 0,
00:10:28.850 "current_io_qpairs": 0,
00:10:28.850 "pending_bdev_io": 0,
00:10:28.850 "completed_nvme_io": 188,
00:10:28.850 "transports": [
00:10:28.850 {
00:10:28.850 "trtype": "TCP"
00:10:28.850 }
00:10:28.850 ]
00:10:28.850 },
00:10:28.850 {
00:10:28.850 "name": "nvmf_tgt_poll_group_001",
00:10:28.850 "admin_qpairs": 2,
00:10:28.850 "io_qpairs": 56,
00:10:28.850 "current_admin_qpairs": 0,
00:10:28.850 "current_io_qpairs": 0,
00:10:28.850 "pending_bdev_io": 0,
00:10:28.850 "completed_nvme_io": 206,
00:10:28.850 "transports": [
00:10:28.850 {
00:10:28.850 "trtype": "TCP"
00:10:28.850 }
00:10:28.850 ]
00:10:28.850 },
00:10:28.850 {
00:10:28.850 "name": "nvmf_tgt_poll_group_002",
00:10:28.850 "admin_qpairs": 1,
00:10:28.850 "io_qpairs": 56,
00:10:28.850 "current_admin_qpairs": 0,
00:10:28.850 "current_io_qpairs": 0,
00:10:28.850 "pending_bdev_io": 0,
00:10:28.850 "completed_nvme_io": 121,
00:10:28.850 "transports": [
00:10:28.850 {
00:10:28.850 "trtype": "TCP"
00:10:28.850 }
00:10:28.850 ]
00:10:28.850 },
00:10:28.850 {
00:10:28.850 "name": "nvmf_tgt_poll_group_003",
00:10:28.850 "admin_qpairs": 2,
00:10:28.850 "io_qpairs": 56,
00:10:28.850 "current_admin_qpairs": 0,
00:10:28.850 "current_io_qpairs": 0,
00:10:28.850 "pending_bdev_io": 0,
00:10:28.850 "completed_nvme_io": 59,
00:10:28.850 "transports": [
00:10:28.850 {
00:10:28.850 "trtype": "TCP"
00:10:28.850 }
00:10:28.850 ]
00:10:28.850 }
00:10:28.850 ]
00:10:28.850 }'
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq
'.poll_groups[].io_qpairs' 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 224 > 0 )) 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:28.850 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:28.850 rmmod nvme_tcp 00:10:28.850 rmmod nvme_fabrics 00:10:28.850 rmmod nvme_keyring 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2527482 ']' 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2527482 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2527482 ']' 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2527482 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2527482 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2527482' 00:10:28.851 killing process with pid 2527482 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2527482 00:10:28.851 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2527482 00:10:29.111 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:29.111 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:29.111 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:29.111 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:29.111 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:29.111 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.111 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.111 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:31.650 00:10:31.650 real 0m23.601s 00:10:31.650 user 1m16.593s 00:10:31.650 sys 0m3.778s 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.650 ************************************ 00:10:31.650 END TEST nvmf_rpc 00:10:31.650 ************************************ 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:31.650 ************************************ 00:10:31.650 START TEST nvmf_invalid 00:10:31.650 ************************************ 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:31.650 * Looking for test storage... 00:10:31.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:31.650 19:07:37 
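
The nvmf_rpc run that just finished validated its nvmf_get_stats output with jsum, the jq-plus-awk helper visible in the trace (target/rpc.sh@19-20): jq emits one number per poll group and awk sums the column. A sketch reconstructed from those xtrace lines; capturing the JSON into a $stats variable matches the trace, while the herestring plumbing is an assumption:

jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

stats=$($rpc nvmf_get_stats)
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 in the run above
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 groups x 56 = 224 above
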
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:31.650 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:31.651 19:07:37 
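
The nvmf/common.sh preamble just traced builds the host identity that every nvme connect in these tests reuses. A sketch of that derivation; the exact way NVME_HOSTID is extracted is an assumption, though reading it as the UUID suffix of the generated NQN is consistent with the matching values in the trace:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: keep only the <uuid> suffix
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
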
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:31.651 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:33.030 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:33.030 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:33.030 Found net devices under 0000:08:00.0: cvl_0_0 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.030 19:07:38 
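
gather_supported_nvmf_pci_devs, traced above, resolves each whitelisted e810 PCI function to its kernel net device through sysfs and keeps only interfaces that are up. A simplified sketch; the operstate read is inferred from the '[[ up == up ]]' lines in the trace rather than copied from the source:

for pci in "${pci_devs[@]}"; do
    # sysfs exposes the netdev name(s) bound to a PCI function
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for net_dev in "${pci_net_devs[@]}"; do
        [[ $(< "$net_dev/operstate") == up ]] || continue
        net_devs+=("${net_dev##*/}")            # strip the path -> cvl_0_0
        echo "Found net devices under $pci: ${net_dev##*/}"
    done
done
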
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:33.030 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:33.031 Found net devices under 0000:08:00.1: cvl_0_1 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:33.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:10:33.031 00:10:33.031 --- 10.0.0.2 ping statistics --- 00:10:33.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.031 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:33.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:33.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:10:33.031 00:10:33.031 --- 10.0.0.1 ping statistics --- 00:10:33.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.031 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2530869 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2530869 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2530869 ']' 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.031 19:07:38 
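
nvmf_tcp_init, traced above, lets target and initiator share one machine by moving the target port (cvl_0_0) into a private network namespace while the initiator port (cvl_0_1) stays in the root namespace. The full sequence, lifted from the trace (addresses, interface names, and the iptables rule are all as logged):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every nvmf_tgt in this test is then started under ip netns exec cvl_0_0_ns_spdk, which is why the listener binds 10.0.0.2 while nvme connect runs from the root namespace.
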
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:33.031 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:33.031 [2024-07-24 19:07:39.013374] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:10:33.031 [2024-07-24 19:07:39.013472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.290 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.290 [2024-07-24 19:07:39.080043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.290 [2024-07-24 19:07:39.200778] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.290 [2024-07-24 19:07:39.200831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.290 [2024-07-24 19:07:39.200847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.290 [2024-07-24 19:07:39.200860] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.290 [2024-07-24 19:07:39.200873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
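The waitforlisten step above blocks until the freshly launched nvmf_tgt answers on its JSON-RPC UNIX socket, so no test RPC races the target's startup. A hedged sketch of the polling idea (the real helper in autotest_common.sh also uses the max_retries=100 bound visible in the trace and bails out if the pid dies):

    wait_for_rpc() {                      # sketch only, not the autotest helper
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1     # target died during startup
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
      done
      return 1                                     # never came up
    }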
00:10:33.290 [2024-07-24 19:07:39.200954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.290 [2024-07-24 19:07:39.201010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.290 [2024-07-24 19:07:39.201044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.290 [2024-07-24 19:07:39.201046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.547 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:33.547 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:10:33.547 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:33.547 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:33.547 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:33.547 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.547 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:33.547 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11465 00:10:33.806 [2024-07-24 19:07:39.632646] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:33.806 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:33.806 { 00:10:33.806 "nqn": "nqn.2016-06.io.spdk:cnode11465", 00:10:33.806 "tgt_name": "foobar", 00:10:33.806 "method": "nvmf_create_subsystem", 00:10:33.806 "req_id": 1 00:10:33.806 } 00:10:33.806 Got JSON-RPC error response 00:10:33.806 response: 00:10:33.806 { 00:10:33.806 "code": -32603, 00:10:33.806 "message": "Unable to find target foobar" 00:10:33.806 }' 00:10:33.806 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:33.806 { 00:10:33.806 "nqn": "nqn.2016-06.io.spdk:cnode11465", 00:10:33.806 "tgt_name": "foobar", 00:10:33.806 "method": "nvmf_create_subsystem", 00:10:33.806 "req_id": 1 00:10:33.806 } 00:10:33.806 Got JSON-RPC error response 00:10:33.806 response: 00:10:33.806 { 00:10:33.806 "code": -32603, 00:10:33.806 "message": "Unable to find target foobar" 00:10:33.806 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:33.806 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:33.806 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18055 00:10:34.064 [2024-07-24 19:07:39.937685] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18055: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:34.064 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:34.064 { 00:10:34.064 "nqn": "nqn.2016-06.io.spdk:cnode18055", 00:10:34.064 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:34.064 "method": "nvmf_create_subsystem", 00:10:34.064 "req_id": 1 00:10:34.064 } 00:10:34.064 Got JSON-RPC error 
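Every negative case from here on follows one shape: fire an RPC that must be rejected, capture the JSON-RPC error text into out, and glob-match the expected message (the escaped-backslash patterns in the trace are those globs). Condensed, and assuming rpc.py exits non-zero and prints the error response on failure:

    # the expect-failure idiom behind each step of invalid.sh
    out=$(scripts/rpc.py nvmf_create_subsystem -t foobar \
          nqn.2016-06.io.spdk:cnode11465 2>&1) || true
    [[ $out == *"Unable to find target"* ]]    # wrong or missing error fails the test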
response 00:10:34.064 response: 00:10:34.064 { 00:10:34.064 "code": -32602, 00:10:34.064 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:34.064 }' 00:10:34.064 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:34.064 { 00:10:34.064 "nqn": "nqn.2016-06.io.spdk:cnode18055", 00:10:34.064 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:34.064 "method": "nvmf_create_subsystem", 00:10:34.064 "req_id": 1 00:10:34.064 } 00:10:34.064 Got JSON-RPC error response 00:10:34.064 response: 00:10:34.064 { 00:10:34.064 "code": -32602, 00:10:34.064 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:34.064 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:34.064 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:34.064 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13857 00:10:34.323 [2024-07-24 19:07:40.242740] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13857: invalid model number 'SPDK_Controller' 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:34.323 { 00:10:34.323 "nqn": "nqn.2016-06.io.spdk:cnode13857", 00:10:34.323 "model_number": "SPDK_Controller\u001f", 00:10:34.323 "method": "nvmf_create_subsystem", 00:10:34.323 "req_id": 1 00:10:34.323 } 00:10:34.323 Got JSON-RPC error response 00:10:34.323 response: 00:10:34.323 { 00:10:34.323 "code": -32602, 00:10:34.323 "message": "Invalid MN SPDK_Controller\u001f" 00:10:34.323 }' 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:34.323 { 00:10:34.323 "nqn": "nqn.2016-06.io.spdk:cnode13857", 00:10:34.323 "model_number": "SPDK_Controller\u001f", 00:10:34.323 "method": "nvmf_create_subsystem", 00:10:34.323 "req_id": 1 00:10:34.323 } 00:10:34.323 Got JSON-RPC error response 00:10:34.323 response: 00:10:34.323 { 00:10:34.323 "code": -32602, 00:10:34.323 "message": "Invalid MN SPDK_Controller\u001f" 00:10:34.323 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 50 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.323 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:34.324 19:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '2LiMV9b9ngK27(T7dR>@' 00:10:34.324 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '2LiMV9b9ngK27(T7dR>@' nqn.2016-06.io.spdk:cnode30559 00:10:34.891 [2024-07-24 19:07:40.603877] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30559: invalid serial number '2LiMV9b9ngK27(T7dR>@' 00:10:34.891 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:34.891 { 00:10:34.891 "nqn": "nqn.2016-06.io.spdk:cnode30559", 00:10:34.891 "serial_number": "2LiMV9b9ngK27(T7dR>@\u007f", 00:10:34.891 "method": "nvmf_create_subsystem", 00:10:34.891 "req_id": 1 00:10:34.891 } 00:10:34.891 Got JSON-RPC error response 00:10:34.892 response: 00:10:34.892 { 00:10:34.892 "code": -32602, 00:10:34.892 "message": "Invalid SN 2LiMV9b9ngK27(T7dR>@\u007f" 00:10:34.892 }' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:34.892 { 00:10:34.892 "nqn": "nqn.2016-06.io.spdk:cnode30559", 00:10:34.892 "serial_number": "2LiMV9b9ngK27(T7dR>@\u007f", 00:10:34.892 "method": "nvmf_create_subsystem", 00:10:34.892 "req_id": 1 00:10:34.892 } 00:10:34.892 Got JSON-RPC error response 00:10:34.892 response: 00:10:34.892 { 00:10:34.892 "code": -32602, 00:10:34.892 "message": "Invalid SN 2LiMV9b9ngK27(T7dR>@\u007f" 00:10:34.892 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:34.892 
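The long printf/echo/string+= run above is gen_random_s at work: it emits one character per iteration, drawn from the decimal code points 32 through 127 listed in the chars array, which is why a DEL byte (0x7f) can land at the end of the generated serial number and make it invalid. A compact sketch of the same generator:

    gen_random_s() {                    # condensed sketch of target/invalid.sh@19
      local length=$1 string= ll c
      for ((ll = 0; ll < length; ll++)); do
        c=$((32 + RANDOM % 96))         # ASCII 32 (space) .. 127 (DEL)
        string+=$(printf "\\x$(printf %x "$c")")
      done
      echo "$string"                    # the real helper also special-cases a leading '-'
    }

The trace that continues below is the same loop again, this time building the 41-character random model number.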
19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:34.892 
19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 
00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:34.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=k 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.893 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.894 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:34.894 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:34.894 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:34.894 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.894 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.894 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ M == \- ]] 00:10:34.894 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # 
echo 'MA:,0$a;|w\G`n%xa-'\''c}e)d)h8=&Zwj`rkjNpg#M' 00:10:34.894 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'MA:,0$a;|w\G`n%xa-'\''c}e)d)h8=&Zwj`rkjNpg#M' nqn.2016-06.io.spdk:cnode13755 00:10:35.152 [2024-07-24 19:07:41.057366] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13755: invalid model number 'MA:,0$a;|w\G`n%xa-'c}e)d)h8=&Zwj`rkjNpg#M' 00:10:35.152 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:35.152 { 00:10:35.152 "nqn": "nqn.2016-06.io.spdk:cnode13755", 00:10:35.152 "model_number": "MA:,0$a;|w\\G`n%xa-'\''c}e)d)h8=&Zwj`rkjNpg#M", 00:10:35.152 "method": "nvmf_create_subsystem", 00:10:35.152 "req_id": 1 00:10:35.152 } 00:10:35.152 Got JSON-RPC error response 00:10:35.152 response: 00:10:35.152 { 00:10:35.152 "code": -32602, 00:10:35.152 "message": "Invalid MN MA:,0$a;|w\\G`n%xa-'\''c}e)d)h8=&Zwj`rkjNpg#M" 00:10:35.152 }' 00:10:35.152 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:35.152 { 00:10:35.152 "nqn": "nqn.2016-06.io.spdk:cnode13755", 00:10:35.152 "model_number": "MA:,0$a;|w\\G`n%xa-'c}e)d)h8=&Zwj`rkjNpg#M", 00:10:35.152 "method": "nvmf_create_subsystem", 00:10:35.152 "req_id": 1 00:10:35.152 } 00:10:35.152 Got JSON-RPC error response 00:10:35.152 response: 00:10:35.152 { 00:10:35.152 "code": -32602, 00:10:35.152 "message": "Invalid MN MA:,0$a;|w\\G`n%xa-'c}e)d)h8=&Zwj`rkjNpg#M" 00:10:35.152 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:35.152 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:35.411 [2024-07-24 19:07:41.362473] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.411 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:35.976 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:35.976 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:35.976 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:35.976 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:35.976 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:35.976 [2024-07-24 19:07:41.960370] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:35.976 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:35.976 { 00:10:35.976 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:35.976 "listen_address": { 00:10:35.976 "trtype": "tcp", 00:10:35.976 "traddr": "", 00:10:35.976 "trsvcid": "4421" 00:10:35.976 }, 00:10:35.976 "method": "nvmf_subsystem_remove_listener", 00:10:35.976 "req_id": 1 00:10:35.976 } 00:10:35.976 Got JSON-RPC error response 00:10:35.976 response: 00:10:35.976 { 00:10:35.976 "code": -32602, 00:10:35.976 "message": "Invalid parameters" 00:10:35.976 }' 00:10:35.976 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
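Having exhausted the malformed-name cases, the script pivots to listener and controller-ID validation: it creates the TCP transport, adds a subsystem with serial SPDK001, then asks to remove a listener that was never added and expects a -32602 "Invalid parameters" rejection. The sequence, sketched with rpc.py as shorthand for the full scripts/rpc.py path:

    rpc.py nvmf_create_transport --trtype tcp
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode \
           -t tcp -a '' -s 4421 && exit 1    # must fail: no such listener exists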
target/invalid.sh@70 -- # [[ request: 00:10:35.976 { 00:10:35.976 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:35.977 "listen_address": { 00:10:35.977 "trtype": "tcp", 00:10:35.977 "traddr": "", 00:10:35.977 "trsvcid": "4421" 00:10:35.977 }, 00:10:35.977 "method": "nvmf_subsystem_remove_listener", 00:10:35.977 "req_id": 1 00:10:35.977 } 00:10:35.977 Got JSON-RPC error response 00:10:35.977 response: 00:10:35.977 { 00:10:35.977 "code": -32602, 00:10:35.977 "message": "Invalid parameters" 00:10:35.977 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:35.977 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14090 -i 0 00:10:36.234 [2024-07-24 19:07:42.205133] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14090: invalid cntlid range [0-65519] 00:10:36.234 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:36.234 { 00:10:36.234 "nqn": "nqn.2016-06.io.spdk:cnode14090", 00:10:36.234 "min_cntlid": 0, 00:10:36.234 "method": "nvmf_create_subsystem", 00:10:36.234 "req_id": 1 00:10:36.234 } 00:10:36.234 Got JSON-RPC error response 00:10:36.234 response: 00:10:36.234 { 00:10:36.234 "code": -32602, 00:10:36.234 "message": "Invalid cntlid range [0-65519]" 00:10:36.234 }' 00:10:36.234 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:36.234 { 00:10:36.234 "nqn": "nqn.2016-06.io.spdk:cnode14090", 00:10:36.234 "min_cntlid": 0, 00:10:36.234 "method": "nvmf_create_subsystem", 00:10:36.234 "req_id": 1 00:10:36.234 } 00:10:36.234 Got JSON-RPC error response 00:10:36.234 response: 00:10:36.234 { 00:10:36.234 "code": -32602, 00:10:36.234 "message": "Invalid cntlid range [0-65519]" 00:10:36.234 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:36.234 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12579 -i 65520 00:10:36.492 [2024-07-24 19:07:42.461927] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12579: invalid cntlid range [65520-65519] 00:10:36.492 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:36.492 { 00:10:36.492 "nqn": "nqn.2016-06.io.spdk:cnode12579", 00:10:36.492 "min_cntlid": 65520, 00:10:36.492 "method": "nvmf_create_subsystem", 00:10:36.492 "req_id": 1 00:10:36.492 } 00:10:36.492 Got JSON-RPC error response 00:10:36.492 response: 00:10:36.492 { 00:10:36.492 "code": -32602, 00:10:36.492 "message": "Invalid cntlid range [65520-65519]" 00:10:36.492 }' 00:10:36.492 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:36.492 { 00:10:36.492 "nqn": "nqn.2016-06.io.spdk:cnode12579", 00:10:36.492 "min_cntlid": 65520, 00:10:36.492 "method": "nvmf_create_subsystem", 00:10:36.492 "req_id": 1 00:10:36.492 } 00:10:36.492 Got JSON-RPC error response 00:10:36.492 response: 00:10:36.492 { 00:10:36.492 "code": -32602, 00:10:36.492 "message": "Invalid cntlid range [65520-65519]" 00:10:36.492 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:36.492 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19741 -I 0 
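The cntlid probes walk both edges of the controller-ID window: cntlid 0 is reserved, the target caps the top at 0xFFEF (65519), and a minimum above the maximum is nonsense, so every variant draws a -32602 "Invalid cntlid range" rejection. The whole batch, consolidated:

    # each call must fail; the bracketed range in the error echoes the bad input
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14090 -i 0        # [0-65519]
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12579 -i 65520    # [65520-65519]
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19741 -I 0        # [1-0]
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3378  -I 65520    # [1-65520]
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23431 -i 6 -I 5   # [6-5]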
00:10:36.750 [2024-07-24 19:07:42.710755] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19741: invalid cntlid range [1-0] 00:10:36.750 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:36.750 { 00:10:36.750 "nqn": "nqn.2016-06.io.spdk:cnode19741", 00:10:36.750 "max_cntlid": 0, 00:10:36.750 "method": "nvmf_create_subsystem", 00:10:36.750 "req_id": 1 00:10:36.750 } 00:10:36.750 Got JSON-RPC error response 00:10:36.750 response: 00:10:36.750 { 00:10:36.750 "code": -32602, 00:10:36.750 "message": "Invalid cntlid range [1-0]" 00:10:36.750 }' 00:10:36.750 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:36.750 { 00:10:36.750 "nqn": "nqn.2016-06.io.spdk:cnode19741", 00:10:36.750 "max_cntlid": 0, 00:10:36.750 "method": "nvmf_create_subsystem", 00:10:36.750 "req_id": 1 00:10:36.750 } 00:10:36.750 Got JSON-RPC error response 00:10:36.750 response: 00:10:36.750 { 00:10:36.750 "code": -32602, 00:10:36.750 "message": "Invalid cntlid range [1-0]" 00:10:36.750 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:36.750 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3378 -I 65520 00:10:37.009 [2024-07-24 19:07:42.963575] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3378: invalid cntlid range [1-65520] 00:10:37.009 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:37.009 { 00:10:37.009 "nqn": "nqn.2016-06.io.spdk:cnode3378", 00:10:37.009 "max_cntlid": 65520, 00:10:37.009 "method": "nvmf_create_subsystem", 00:10:37.009 "req_id": 1 00:10:37.009 } 00:10:37.009 Got JSON-RPC error response 00:10:37.009 response: 00:10:37.009 { 00:10:37.009 "code": -32602, 00:10:37.009 "message": "Invalid cntlid range [1-65520]" 00:10:37.009 }' 00:10:37.009 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:37.009 { 00:10:37.009 "nqn": "nqn.2016-06.io.spdk:cnode3378", 00:10:37.009 "max_cntlid": 65520, 00:10:37.009 "method": "nvmf_create_subsystem", 00:10:37.009 "req_id": 1 00:10:37.009 } 00:10:37.009 Got JSON-RPC error response 00:10:37.009 response: 00:10:37.009 { 00:10:37.009 "code": -32602, 00:10:37.009 "message": "Invalid cntlid range [1-65520]" 00:10:37.009 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:37.009 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23431 -i 6 -I 5 00:10:37.266 [2024-07-24 19:07:43.204346] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23431: invalid cntlid range [6-5] 00:10:37.266 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:37.266 { 00:10:37.266 "nqn": "nqn.2016-06.io.spdk:cnode23431", 00:10:37.266 "min_cntlid": 6, 00:10:37.266 "max_cntlid": 5, 00:10:37.266 "method": "nvmf_create_subsystem", 00:10:37.266 "req_id": 1 00:10:37.266 } 00:10:37.266 Got JSON-RPC error response 00:10:37.266 response: 00:10:37.266 { 00:10:37.266 "code": -32602, 00:10:37.266 "message": "Invalid cntlid range [6-5]" 00:10:37.266 }' 00:10:37.266 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:37.266 { 00:10:37.266 "nqn": 
"nqn.2016-06.io.spdk:cnode23431", 00:10:37.266 "min_cntlid": 6, 00:10:37.266 "max_cntlid": 5, 00:10:37.266 "method": "nvmf_create_subsystem", 00:10:37.266 "req_id": 1 00:10:37.266 } 00:10:37.266 Got JSON-RPC error response 00:10:37.266 response: 00:10:37.266 { 00:10:37.266 "code": -32602, 00:10:37.266 "message": "Invalid cntlid range [6-5]" 00:10:37.266 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:37.266 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:37.532 { 00:10:37.532 "name": "foobar", 00:10:37.532 "method": "nvmf_delete_target", 00:10:37.532 "req_id": 1 00:10:37.532 } 00:10:37.532 Got JSON-RPC error response 00:10:37.532 response: 00:10:37.532 { 00:10:37.532 "code": -32602, 00:10:37.532 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:37.532 }' 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:37.532 { 00:10:37.532 "name": "foobar", 00:10:37.532 "method": "nvmf_delete_target", 00:10:37.532 "req_id": 1 00:10:37.532 } 00:10:37.532 Got JSON-RPC error response 00:10:37.532 response: 00:10:37.532 { 00:10:37.532 "code": -32602, 00:10:37.532 "message": "The specified target doesn't exist, cannot delete it." 00:10:37.532 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:37.532 rmmod nvme_tcp 00:10:37.532 rmmod nvme_fabrics 00:10:37.532 rmmod nvme_keyring 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2530869 ']' 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2530869 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2530869 ']' 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2530869 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:37.532 
19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2530869 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2530869' 00:10:37.532 killing process with pid 2530869 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2530869 00:10:37.532 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2530869 00:10:37.802 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:37.802 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:37.802 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:37.802 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:37.802 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:37.802 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.802 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.802 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.739 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:39.739 00:10:39.739 real 0m8.515s 00:10:39.739 user 0m21.434s 00:10:39.739 sys 0m2.174s 00:10:39.739 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.739 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:39.739 ************************************ 00:10:39.739 END TEST nvmf_invalid 00:10:39.739 ************************************ 00:10:39.739 19:07:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:39.739 19:07:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:39.739 19:07:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.739 19:07:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:39.739 ************************************ 00:10:39.739 START TEST nvmf_connect_stress 00:10:39.739 ************************************ 00:10:39.739 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:39.998 * Looking for test storage... 
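The nvmf_invalid test that just finished drives nvmf_create_subsystem through rpc.py with controller-ID (cntlid) ranges the target must refuse: a max_cntlid of 0 (the impossible range [1-0]), a max_cntlid of 65520 (past the largest value accepted here), and min greater than max (range [6-5]). Each attempt is expected to come back as JSON-RPC error -32602 with an "Invalid cntlid range" message, and nvmf_delete_target on a nonexistent target is checked the same way. A minimal sketch of reproducing one of these rejections by hand, assuming a running nvmf_tgt and an SPDK checkout at $SPDK (the path is illustrative, the -i/-I flags are the ones used above):

    SPDK=/path/to/spdk   # hypothetical checkout location
    # max_cntlid 0 yields the impossible range [1-0] and is rejected with -32602
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -I 0 \
        || echo "rejected as expected"
    # min_cntlid > max_cntlid ([6-5]) fails the same validation
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 6 -I 5 \
        || echo "rejected as expected"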
00:10:39.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.998 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.998 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:39.998 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.998 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.998 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.998 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.998 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.998 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.998 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.998 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.998 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.998 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:39.999 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:41.910 19:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:41.910 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:41.910 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
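The loop running here walks the cached PCI devices, matches the two Intel E810 ports (vendor 0x8086, device 0x159b, bound to the ice driver), and collects their kernel net devices for the TCP test. A rough stand-alone equivalent of that discovery, assuming the usual sysfs layout (a sketch, not the harness's actual code path):

    # Find E810 (8086:159b) ports and print their net interface names
    for pci in /sys/bus/pci/devices/*; do
        if [ "$(cat "$pci/vendor")" = "0x8086" ] && [ "$(cat "$pci/device")" = "0x159b" ]; then
            echo "E810 port ${pci##*/}: $(ls "$pci/net" 2>/dev/null)"
        fi
    done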
00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:41.910 Found net devices under 0000:08:00.0: cvl_0_0 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.910 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:41.910 Found net devices under 0000:08:00.1: cvl_0_1 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:41.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:10:41.911 00:10:41.911 --- 10.0.0.2 ping statistics --- 00:10:41.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.911 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:41.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:10:41.911 00:10:41.911 --- 10.0.0.1 ping statistics --- 00:10:41.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.911 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2532930 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2532930 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2532930 ']' 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.911 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.911 [2024-07-24 19:07:47.707370] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
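With both pings answering (single packets, roughly 0.2-0.3 ms RTT), the TCP test topology is in place: one E810 port (cvl_0_0, 10.0.0.2) has been moved into the cvl_0_0_ns_spdk namespace as the target side, the other (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, and TCP port 4420 is opened in iptables. A condensed replay of the plumbing performed above, with the interface and namespace names taken from this log:

    # Target port goes into its own namespace; initiator stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target, as verified above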
00:10:41.911 [2024-07-24 19:07:47.707466] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.911 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.911 [2024-07-24 19:07:47.772938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:41.911 [2024-07-24 19:07:47.889417] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.911 [2024-07-24 19:07:47.889493] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.911 [2024-07-24 19:07:47.889511] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.911 [2024-07-24 19:07:47.889545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.911 [2024-07-24 19:07:47.889558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.911 [2024-07-24 19:07:47.889650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.911 [2024-07-24 19:07:47.889737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.911 [2024-07-24 19:07:47.889771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.169 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.169 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:10:42.169 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:42.169 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:42.169 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:42.169 [2024-07-24 19:07:48.017432] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:42.169 [2024-07-24 19:07:48.043699] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:42.169 NULL1 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2533045 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:42.169 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:42.170 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.170 19:07:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:42.427 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.427 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:42.427 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:42.427 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.428 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:42.993 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.993 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:42.993 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:42.993 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.993 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.251 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.251 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:43.251 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:43.251 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.251 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.508 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.508 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:43.508 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:43.508 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.508 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.765 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.765 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:43.765 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:43.765 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.765 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.024 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.024 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:44.024 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.024 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.024 19:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.592 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.592 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:44.592 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.592 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.592 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.852 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.852 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:44.852 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.852 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.852 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.111 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.111 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:45.111 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.111 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.111 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.370 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.370 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:45.370 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.370 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.370 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.630 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.630 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:45.630 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.630 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.630 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.197 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.197 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:46.197 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.197 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.197 19:07:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.456 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.456 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:46.456 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.456 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.456 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.716 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.716 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:46.716 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.716 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.716 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.974 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.974 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:46.974 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.974 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.974 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.232 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.232 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:47.232 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.232 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.232 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.798 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.798 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:47.798 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.798 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.798 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.054 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.054 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:48.054 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.054 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.055 19:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.312 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.312 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:48.312 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.312 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.312 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.572 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.572 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:48.572 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.572 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.572 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.143 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.143 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:49.143 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.143 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.143 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.403 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.403 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:49.403 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.403 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.403 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.661 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.661 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:49.661 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.661 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.661 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.921 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.921 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:49.921 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.921 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.921 19:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.181 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.181 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:50.181 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.181 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.181 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.751 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.751 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:50.751 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.751 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.751 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.010 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.010 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:51.010 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.010 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.010 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.270 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.270 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:51.270 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.270 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.270 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.530 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.530 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:51.530 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.530 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.530 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.790 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.790 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:51.790 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.790 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.790 19:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.050 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:52.050 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.050 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.050 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.310 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2533045 00:10:52.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2533045) - No such process 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2533045 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:52.570 rmmod nvme_tcp 00:10:52.570 rmmod nvme_fabrics 00:10:52.570 rmmod nvme_keyring 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2532930 ']' 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2532930 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2532930 ']' 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2532930 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:52.570 19:07:58 
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:52.570 rmmod nvme_tcp
00:10:52.570 rmmod nvme_fabrics
00:10:52.570 rmmod nvme_keyring
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2532930 ']'
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2532930
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2532930 ']'
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2532930
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2532930
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2532930'
00:10:52.570 killing process with pid 2532930
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2532930
00:10:52.570 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2532930
00:10:52.830 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:52.830 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:52.830 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:52.830 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:52.830 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:52.830 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:52.830 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:52.830 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:54.740 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:54.740
00:10:54.740 real 0m15.007s
00:10:54.740 user 0m38.323s
00:10:54.740 sys 0m5.483s
00:10:54.740 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:54.740 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:54.740 ************************************
00:10:54.740 END TEST nvmf_connect_stress
00:10:54.740 ************************************
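
The nvmftestfini trace above tears the harness down in a fixed order: sync, unload the NVMe/TCP kernel modules with failures tolerated (they can keep references briefly), then kill and reap the nvmf_tgt app by the PID recorded at launch. A condensed sketch of that shape; variable names are illustrative, the real helpers live in test/nvmf/common.sh:

    sync
    set +e                                # unload may fail while references remain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break  # prints the rmmod steps seen above
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
    if [ -n "$nvmfpid" ]; then            # PID captured when nvmf_tgt was started
        kill "$nvmfpid"
        wait "$nvmfpid"                   # reap it so the run_test timing is accurate
    fi
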
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:54.999 ************************************
00:10:54.999 START TEST nvmf_fused_ordering
00:10:54.999 ************************************
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:10:54.999 * Looking for test storage...
00:10:54.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:54.999 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable
00:10:55.000 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=()
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=()
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=()
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=()
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=()
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=()
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=()
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx
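
gather_supported_nvmf_pci_devs, whose scan follows, fills those arrays with supported Intel E810/X722 and Mellanox PCI IDs and then resolves each matching PCI address to its kernel net interface through sysfs. A stripped-down sketch of the sysfs walk; the device list here is illustrative (the suite builds it from cached PCI-ID lookups in pci_bus_cache):

    # Map each candidate PCI address to the net device(s) sysfs exposes for it.
    pci_devs=(0000:08:00.0 0000:08:00.1)   # illustrative; normally filled from PCI-ID matches
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] || continue     # no net interface bound to this function
            net_devs+=("${path##*/}")
            echo "Found net devices under $pci: ${path##*/}"
        done
    done
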
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)'
00:10:56.909 Found 0000:08:00.0 (0x8086 - 0x159b)
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:10:56.909 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)'
00:10:56.910 Found 0000:08:00.1 (0x8086 - 0x159b)
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0'
00:10:56.910 Found net devices under 0000:08:00.0: cvl_0_0
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1'
00:10:56.910 Found net devices under 0000:08:00.1: cvl_0_1
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
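
Collected in one place, the nvmf_tcp_init commands above build the two-interface test topology: the first port (cvl_0_0) moves into a private namespace and becomes the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting NVMe/TCP traffic on port 4420:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow verify the path in both directions before any NVMe traffic is attempted.
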
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:56.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:56.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms
00:10:56.910
00:10:56.910 --- 10.0.0.2 ping statistics ---
00:10:56.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:56.910 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:56.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:56.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms
00:10:56.910
00:10:56.910 --- 10.0.0.1 ping statistics ---
00:10:56.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:56.910 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2535463
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2535463
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2535463 ']'
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
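
The nvmfappstart/waitforlisten lines above launch nvmf_tgt inside the target namespace and then poll for its RPC socket (up to max_retries=100) before the test proceeds. A rough equivalent of the two helpers; the polling RPC and interval here are assumptions for illustration, not the harness's exact internals:

    # Start the target app in the namespace and record its PID.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Wait until the app answers on /var/tmp/spdk.sock.
    for ((i = 0; i < 100; i++)); do
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done
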
00:10:56.910 [2024-07-24 19:08:02.614053] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:10:56.910 [2024-07-24 19:08:02.614159] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:56.910 EAL: No free 2048 kB hugepages reported on node 1
00:10:56.910 [2024-07-24 19:08:02.680704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:56.910 [2024-07-24 19:08:02.799270] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:56.910 [2024-07-24 19:08:02.799339] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:56.910 [2024-07-24 19:08:02.799355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:56.910 [2024-07-24 19:08:02.799368] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:56.910 [2024-07-24 19:08:02.799380] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:56.910 [2024-07-24 19:08:02.799417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:56.910 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0
00:10:56.911 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:10:56.911 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:56.911 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
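
With the app up, fused_ordering.sh provisions the target entirely over RPC; the rpc_cmd lines that follow (rpc_cmd effectively forwards to scripts/rpc.py against /var/tmp/spdk.sock) drive this exact sequence:

    rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport; -u sets an 8192-byte I/O unit size
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512                          # 1000 MiB null bdev, 512-byte blocks
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # exposed as namespace 1 (1 GB, per the log below)

The fused_ordering binary then connects to that subsystem; the numbered fused_ordering(N) lines that fill the rest of this section are its per-command progress output.
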
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:57.170 [2024-07-24 19:08:02.939807] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:57.170 [2024-07-24 19:08:02.955981] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:57.170 NULL1
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:57.170 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
[2024-07-24 19:08:03.002869] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:10:57.170 [2024-07-24 19:08:03.002918] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2535483 ] 00:10:57.170 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.739 Attached to nqn.2016-06.io.spdk:cnode1 00:10:57.739 Namespace ID: 1 size: 1GB 00:10:57.739 fused_ordering(0) 00:10:57.739 fused_ordering(1) 00:10:57.739 fused_ordering(2) 00:10:57.739 fused_ordering(3) 00:10:57.739 fused_ordering(4) 00:10:57.739 fused_ordering(5) 00:10:57.739 fused_ordering(6) 00:10:57.739 fused_ordering(7) 00:10:57.739 fused_ordering(8) 00:10:57.739 fused_ordering(9) 00:10:57.739 fused_ordering(10) 00:10:57.739 fused_ordering(11) 00:10:57.739 fused_ordering(12) 00:10:57.739 fused_ordering(13) 00:10:57.739 fused_ordering(14) 00:10:57.739 fused_ordering(15) 00:10:57.739 fused_ordering(16) 00:10:57.739 fused_ordering(17) 00:10:57.739 fused_ordering(18) 00:10:57.739 fused_ordering(19) 00:10:57.739 fused_ordering(20) 00:10:57.739 fused_ordering(21) 00:10:57.739 fused_ordering(22) 00:10:57.739 fused_ordering(23) 00:10:57.739 fused_ordering(24) 00:10:57.739 fused_ordering(25) 00:10:57.739 fused_ordering(26) 00:10:57.739 fused_ordering(27) 00:10:57.739 fused_ordering(28) 00:10:57.739 fused_ordering(29) 00:10:57.739 fused_ordering(30) 00:10:57.739 fused_ordering(31) 00:10:57.739 fused_ordering(32) 00:10:57.739 fused_ordering(33) 00:10:57.739 fused_ordering(34) 00:10:57.739 fused_ordering(35) 00:10:57.739 fused_ordering(36) 00:10:57.739 fused_ordering(37) 00:10:57.739 fused_ordering(38) 00:10:57.739 fused_ordering(39) 00:10:57.739 fused_ordering(40) 00:10:57.739 fused_ordering(41) 00:10:57.739 fused_ordering(42) 00:10:57.739 fused_ordering(43) 00:10:57.739 fused_ordering(44) 00:10:57.739 fused_ordering(45) 00:10:57.739 fused_ordering(46) 00:10:57.739 fused_ordering(47) 00:10:57.739 fused_ordering(48) 00:10:57.739 fused_ordering(49) 00:10:57.739 fused_ordering(50) 00:10:57.739 fused_ordering(51) 00:10:57.739 fused_ordering(52) 00:10:57.739 fused_ordering(53) 00:10:57.739 fused_ordering(54) 00:10:57.739 fused_ordering(55) 00:10:57.739 fused_ordering(56) 00:10:57.739 fused_ordering(57) 00:10:57.739 fused_ordering(58) 00:10:57.739 fused_ordering(59) 00:10:57.739 fused_ordering(60) 00:10:57.739 fused_ordering(61) 00:10:57.739 fused_ordering(62) 00:10:57.739 fused_ordering(63) 00:10:57.739 fused_ordering(64) 00:10:57.739 fused_ordering(65) 00:10:57.739 fused_ordering(66) 00:10:57.739 fused_ordering(67) 00:10:57.739 fused_ordering(68) 00:10:57.739 fused_ordering(69) 00:10:57.739 fused_ordering(70) 00:10:57.739 fused_ordering(71) 00:10:57.739 fused_ordering(72) 00:10:57.739 fused_ordering(73) 00:10:57.739 fused_ordering(74) 00:10:57.739 fused_ordering(75) 00:10:57.739 fused_ordering(76) 00:10:57.739 fused_ordering(77) 00:10:57.739 fused_ordering(78) 00:10:57.739 fused_ordering(79) 00:10:57.739 fused_ordering(80) 00:10:57.739 fused_ordering(81) 00:10:57.739 fused_ordering(82) 00:10:57.739 fused_ordering(83) 00:10:57.739 fused_ordering(84) 00:10:57.739 fused_ordering(85) 00:10:57.739 fused_ordering(86) 00:10:57.739 fused_ordering(87) 00:10:57.739 fused_ordering(88) 00:10:57.739 fused_ordering(89) 00:10:57.739 fused_ordering(90) 00:10:57.739 fused_ordering(91) 00:10:57.739 fused_ordering(92) 00:10:57.739 fused_ordering(93) 00:10:57.739 fused_ordering(94) 00:10:57.739 fused_ordering(95) 00:10:57.739 fused_ordering(96) 
00:10:57.739 fused_ordering(97) 00:10:57.739 fused_ordering(98) 00:10:57.739 fused_ordering(99) 00:10:57.739 fused_ordering(100) 00:10:57.739 fused_ordering(101) 00:10:57.739 fused_ordering(102) 00:10:57.739 fused_ordering(103) 00:10:57.739 fused_ordering(104) 00:10:57.739 fused_ordering(105) 00:10:57.739 fused_ordering(106) 00:10:57.739 fused_ordering(107) 00:10:57.739 fused_ordering(108) 00:10:57.739 fused_ordering(109) 00:10:57.739 fused_ordering(110) 00:10:57.739 fused_ordering(111) 00:10:57.739 fused_ordering(112) 00:10:57.739 fused_ordering(113) 00:10:57.739 fused_ordering(114) 00:10:57.739 fused_ordering(115) 00:10:57.739 fused_ordering(116) 00:10:57.739 fused_ordering(117) 00:10:57.739 fused_ordering(118) 00:10:57.739 fused_ordering(119) 00:10:57.739 fused_ordering(120) 00:10:57.739 fused_ordering(121) 00:10:57.739 fused_ordering(122) 00:10:57.739 fused_ordering(123) 00:10:57.739 fused_ordering(124) 00:10:57.739 fused_ordering(125) 00:10:57.739 fused_ordering(126) 00:10:57.739 fused_ordering(127) 00:10:57.739 fused_ordering(128) 00:10:57.739 fused_ordering(129) 00:10:57.739 fused_ordering(130) 00:10:57.739 fused_ordering(131) 00:10:57.739 fused_ordering(132) 00:10:57.739 fused_ordering(133) 00:10:57.739 fused_ordering(134) 00:10:57.739 fused_ordering(135) 00:10:57.739 fused_ordering(136) 00:10:57.739 fused_ordering(137) 00:10:57.739 fused_ordering(138) 00:10:57.739 fused_ordering(139) 00:10:57.739 fused_ordering(140) 00:10:57.739 fused_ordering(141) 00:10:57.739 fused_ordering(142) 00:10:57.739 fused_ordering(143) 00:10:57.739 fused_ordering(144) 00:10:57.739 fused_ordering(145) 00:10:57.739 fused_ordering(146) 00:10:57.739 fused_ordering(147) 00:10:57.739 fused_ordering(148) 00:10:57.739 fused_ordering(149) 00:10:57.739 fused_ordering(150) 00:10:57.739 fused_ordering(151) 00:10:57.740 fused_ordering(152) 00:10:57.740 fused_ordering(153) 00:10:57.740 fused_ordering(154) 00:10:57.740 fused_ordering(155) 00:10:57.740 fused_ordering(156) 00:10:57.740 fused_ordering(157) 00:10:57.740 fused_ordering(158) 00:10:57.740 fused_ordering(159) 00:10:57.740 fused_ordering(160) 00:10:57.740 fused_ordering(161) 00:10:57.740 fused_ordering(162) 00:10:57.740 fused_ordering(163) 00:10:57.740 fused_ordering(164) 00:10:57.740 fused_ordering(165) 00:10:57.740 fused_ordering(166) 00:10:57.740 fused_ordering(167) 00:10:57.740 fused_ordering(168) 00:10:57.740 fused_ordering(169) 00:10:57.740 fused_ordering(170) 00:10:57.740 fused_ordering(171) 00:10:57.740 fused_ordering(172) 00:10:57.740 fused_ordering(173) 00:10:57.740 fused_ordering(174) 00:10:57.740 fused_ordering(175) 00:10:57.740 fused_ordering(176) 00:10:57.740 fused_ordering(177) 00:10:57.740 fused_ordering(178) 00:10:57.740 fused_ordering(179) 00:10:57.740 fused_ordering(180) 00:10:57.740 fused_ordering(181) 00:10:57.740 fused_ordering(182) 00:10:57.740 fused_ordering(183) 00:10:57.740 fused_ordering(184) 00:10:57.740 fused_ordering(185) 00:10:57.740 fused_ordering(186) 00:10:57.740 fused_ordering(187) 00:10:57.740 fused_ordering(188) 00:10:57.740 fused_ordering(189) 00:10:57.740 fused_ordering(190) 00:10:57.740 fused_ordering(191) 00:10:57.740 fused_ordering(192) 00:10:57.740 fused_ordering(193) 00:10:57.740 fused_ordering(194) 00:10:57.740 fused_ordering(195) 00:10:57.740 fused_ordering(196) 00:10:57.740 fused_ordering(197) 00:10:57.740 fused_ordering(198) 00:10:57.740 fused_ordering(199) 00:10:57.740 fused_ordering(200) 00:10:57.740 fused_ordering(201) 00:10:57.740 fused_ordering(202) 00:10:57.740 fused_ordering(203) 00:10:57.740 
fused_ordering(204) 00:10:57.740 fused_ordering(205) 00:10:58.000 fused_ordering(206) 00:10:58.000 fused_ordering(207) 00:10:58.000 fused_ordering(208) 00:10:58.000 fused_ordering(209) 00:10:58.000 fused_ordering(210) 00:10:58.000 fused_ordering(211) 00:10:58.000 fused_ordering(212) 00:10:58.000 fused_ordering(213) 00:10:58.000 fused_ordering(214) 00:10:58.000 fused_ordering(215) 00:10:58.000 fused_ordering(216) 00:10:58.000 fused_ordering(217) 00:10:58.000 fused_ordering(218) 00:10:58.000 fused_ordering(219) 00:10:58.000 fused_ordering(220) 00:10:58.000 fused_ordering(221) 00:10:58.000 fused_ordering(222) 00:10:58.000 fused_ordering(223) 00:10:58.000 fused_ordering(224) 00:10:58.000 fused_ordering(225) 00:10:58.000 fused_ordering(226) 00:10:58.000 fused_ordering(227) 00:10:58.000 fused_ordering(228) 00:10:58.000 fused_ordering(229) 00:10:58.000 fused_ordering(230) 00:10:58.000 fused_ordering(231) 00:10:58.000 fused_ordering(232) 00:10:58.000 fused_ordering(233) 00:10:58.000 fused_ordering(234) 00:10:58.000 fused_ordering(235) 00:10:58.000 fused_ordering(236) 00:10:58.000 fused_ordering(237) 00:10:58.000 fused_ordering(238) 00:10:58.000 fused_ordering(239) 00:10:58.000 fused_ordering(240) 00:10:58.000 fused_ordering(241) 00:10:58.000 fused_ordering(242) 00:10:58.000 fused_ordering(243) 00:10:58.000 fused_ordering(244) 00:10:58.000 fused_ordering(245) 00:10:58.000 fused_ordering(246) 00:10:58.000 fused_ordering(247) 00:10:58.000 fused_ordering(248) 00:10:58.000 fused_ordering(249) 00:10:58.000 fused_ordering(250) 00:10:58.000 fused_ordering(251) 00:10:58.000 fused_ordering(252) 00:10:58.000 fused_ordering(253) 00:10:58.000 fused_ordering(254) 00:10:58.000 fused_ordering(255) 00:10:58.000 fused_ordering(256) 00:10:58.000 fused_ordering(257) 00:10:58.000 fused_ordering(258) 00:10:58.000 fused_ordering(259) 00:10:58.000 fused_ordering(260) 00:10:58.000 fused_ordering(261) 00:10:58.000 fused_ordering(262) 00:10:58.000 fused_ordering(263) 00:10:58.000 fused_ordering(264) 00:10:58.000 fused_ordering(265) 00:10:58.000 fused_ordering(266) 00:10:58.000 fused_ordering(267) 00:10:58.000 fused_ordering(268) 00:10:58.000 fused_ordering(269) 00:10:58.000 fused_ordering(270) 00:10:58.000 fused_ordering(271) 00:10:58.000 fused_ordering(272) 00:10:58.000 fused_ordering(273) 00:10:58.000 fused_ordering(274) 00:10:58.000 fused_ordering(275) 00:10:58.000 fused_ordering(276) 00:10:58.000 fused_ordering(277) 00:10:58.000 fused_ordering(278) 00:10:58.000 fused_ordering(279) 00:10:58.000 fused_ordering(280) 00:10:58.001 fused_ordering(281) 00:10:58.001 fused_ordering(282) 00:10:58.001 fused_ordering(283) 00:10:58.001 fused_ordering(284) 00:10:58.001 fused_ordering(285) 00:10:58.001 fused_ordering(286) 00:10:58.001 fused_ordering(287) 00:10:58.001 fused_ordering(288) 00:10:58.001 fused_ordering(289) 00:10:58.001 fused_ordering(290) 00:10:58.001 fused_ordering(291) 00:10:58.001 fused_ordering(292) 00:10:58.001 fused_ordering(293) 00:10:58.001 fused_ordering(294) 00:10:58.001 fused_ordering(295) 00:10:58.001 fused_ordering(296) 00:10:58.001 fused_ordering(297) 00:10:58.001 fused_ordering(298) 00:10:58.001 fused_ordering(299) 00:10:58.001 fused_ordering(300) 00:10:58.001 fused_ordering(301) 00:10:58.001 fused_ordering(302) 00:10:58.001 fused_ordering(303) 00:10:58.001 fused_ordering(304) 00:10:58.001 fused_ordering(305) 00:10:58.001 fused_ordering(306) 00:10:58.001 fused_ordering(307) 00:10:58.001 fused_ordering(308) 00:10:58.001 fused_ordering(309) 00:10:58.001 fused_ordering(310) 00:10:58.001 fused_ordering(311) 
00:10:58.001 fused_ordering(312) 00:10:58.001 fused_ordering(313) 00:10:58.001 fused_ordering(314) 00:10:58.001 fused_ordering(315) 00:10:58.001 fused_ordering(316) 00:10:58.001 fused_ordering(317) 00:10:58.001 fused_ordering(318) 00:10:58.001 fused_ordering(319) 00:10:58.001 fused_ordering(320) 00:10:58.001 fused_ordering(321) 00:10:58.001 fused_ordering(322) 00:10:58.001 fused_ordering(323) 00:10:58.001 fused_ordering(324) 00:10:58.001 fused_ordering(325) 00:10:58.001 fused_ordering(326) 00:10:58.001 fused_ordering(327) 00:10:58.001 fused_ordering(328) 00:10:58.001 fused_ordering(329) 00:10:58.001 fused_ordering(330) 00:10:58.001 fused_ordering(331) 00:10:58.001 fused_ordering(332) 00:10:58.001 fused_ordering(333) 00:10:58.001 fused_ordering(334) 00:10:58.001 fused_ordering(335) 00:10:58.001 fused_ordering(336) 00:10:58.001 fused_ordering(337) 00:10:58.001 fused_ordering(338) 00:10:58.001 fused_ordering(339) 00:10:58.001 fused_ordering(340) 00:10:58.001 fused_ordering(341) 00:10:58.001 fused_ordering(342) 00:10:58.001 fused_ordering(343) 00:10:58.001 fused_ordering(344) 00:10:58.001 fused_ordering(345) 00:10:58.001 fused_ordering(346) 00:10:58.001 fused_ordering(347) 00:10:58.001 fused_ordering(348) 00:10:58.001 fused_ordering(349) 00:10:58.001 fused_ordering(350) 00:10:58.001 fused_ordering(351) 00:10:58.001 fused_ordering(352) 00:10:58.001 fused_ordering(353) 00:10:58.001 fused_ordering(354) 00:10:58.001 fused_ordering(355) 00:10:58.001 fused_ordering(356) 00:10:58.001 fused_ordering(357) 00:10:58.001 fused_ordering(358) 00:10:58.001 fused_ordering(359) 00:10:58.001 fused_ordering(360) 00:10:58.001 fused_ordering(361) 00:10:58.001 fused_ordering(362) 00:10:58.001 fused_ordering(363) 00:10:58.001 fused_ordering(364) 00:10:58.001 fused_ordering(365) 00:10:58.001 fused_ordering(366) 00:10:58.001 fused_ordering(367) 00:10:58.001 fused_ordering(368) 00:10:58.001 fused_ordering(369) 00:10:58.001 fused_ordering(370) 00:10:58.001 fused_ordering(371) 00:10:58.001 fused_ordering(372) 00:10:58.001 fused_ordering(373) 00:10:58.001 fused_ordering(374) 00:10:58.001 fused_ordering(375) 00:10:58.001 fused_ordering(376) 00:10:58.001 fused_ordering(377) 00:10:58.001 fused_ordering(378) 00:10:58.001 fused_ordering(379) 00:10:58.001 fused_ordering(380) 00:10:58.001 fused_ordering(381) 00:10:58.001 fused_ordering(382) 00:10:58.001 fused_ordering(383) 00:10:58.001 fused_ordering(384) 00:10:58.001 fused_ordering(385) 00:10:58.001 fused_ordering(386) 00:10:58.001 fused_ordering(387) 00:10:58.001 fused_ordering(388) 00:10:58.001 fused_ordering(389) 00:10:58.001 fused_ordering(390) 00:10:58.001 fused_ordering(391) 00:10:58.001 fused_ordering(392) 00:10:58.001 fused_ordering(393) 00:10:58.001 fused_ordering(394) 00:10:58.001 fused_ordering(395) 00:10:58.001 fused_ordering(396) 00:10:58.001 fused_ordering(397) 00:10:58.001 fused_ordering(398) 00:10:58.001 fused_ordering(399) 00:10:58.001 fused_ordering(400) 00:10:58.001 fused_ordering(401) 00:10:58.001 fused_ordering(402) 00:10:58.001 fused_ordering(403) 00:10:58.001 fused_ordering(404) 00:10:58.001 fused_ordering(405) 00:10:58.001 fused_ordering(406) 00:10:58.001 fused_ordering(407) 00:10:58.001 fused_ordering(408) 00:10:58.001 fused_ordering(409) 00:10:58.001 fused_ordering(410) 00:10:58.568 fused_ordering(411) 00:10:58.568 fused_ordering(412) 00:10:58.568 fused_ordering(413) 00:10:58.568 fused_ordering(414) 00:10:58.568 fused_ordering(415) 00:10:58.568 fused_ordering(416) 00:10:58.568 fused_ordering(417) 00:10:58.568 fused_ordering(418) 00:10:58.568 
fused_ordering(419) 00:10:58.568 fused_ordering(420) 00:10:58.568 fused_ordering(421) 00:10:58.568 fused_ordering(422) 00:10:58.568 fused_ordering(423) 00:10:58.568 fused_ordering(424) 00:10:58.568 fused_ordering(425) 00:10:58.568 fused_ordering(426) 00:10:58.568 fused_ordering(427) 00:10:58.568 fused_ordering(428) 00:10:58.568 fused_ordering(429) 00:10:58.568 fused_ordering(430) 00:10:58.568 fused_ordering(431) 00:10:58.569 fused_ordering(432) 00:10:58.569 fused_ordering(433) 00:10:58.569 fused_ordering(434) 00:10:58.569 fused_ordering(435) 00:10:58.569 fused_ordering(436) 00:10:58.569 fused_ordering(437) 00:10:58.569 fused_ordering(438) 00:10:58.569 fused_ordering(439) 00:10:58.569 fused_ordering(440) 00:10:58.569 fused_ordering(441) 00:10:58.569 fused_ordering(442) 00:10:58.569 fused_ordering(443) 00:10:58.569 fused_ordering(444) 00:10:58.569 fused_ordering(445) 00:10:58.569 fused_ordering(446) 00:10:58.569 fused_ordering(447) 00:10:58.569 fused_ordering(448) 00:10:58.569 fused_ordering(449) 00:10:58.569 fused_ordering(450) 00:10:58.569 fused_ordering(451) 00:10:58.569 fused_ordering(452) 00:10:58.569 fused_ordering(453) 00:10:58.569 fused_ordering(454) 00:10:58.569 fused_ordering(455) 00:10:58.569 fused_ordering(456) 00:10:58.569 fused_ordering(457) 00:10:58.569 fused_ordering(458) 00:10:58.569 fused_ordering(459) 00:10:58.569 fused_ordering(460) 00:10:58.569 fused_ordering(461) 00:10:58.569 fused_ordering(462) 00:10:58.569 fused_ordering(463) 00:10:58.569 fused_ordering(464) 00:10:58.569 fused_ordering(465) 00:10:58.569 fused_ordering(466) 00:10:58.569 fused_ordering(467) 00:10:58.569 fused_ordering(468) 00:10:58.569 fused_ordering(469) 00:10:58.569 fused_ordering(470) 00:10:58.569 fused_ordering(471) 00:10:58.569 fused_ordering(472) 00:10:58.569 fused_ordering(473) 00:10:58.569 fused_ordering(474) 00:10:58.569 fused_ordering(475) 00:10:58.569 fused_ordering(476) 00:10:58.569 fused_ordering(477) 00:10:58.569 fused_ordering(478) 00:10:58.569 fused_ordering(479) 00:10:58.569 fused_ordering(480) 00:10:58.569 fused_ordering(481) 00:10:58.569 fused_ordering(482) 00:10:58.569 fused_ordering(483) 00:10:58.569 fused_ordering(484) 00:10:58.569 fused_ordering(485) 00:10:58.569 fused_ordering(486) 00:10:58.569 fused_ordering(487) 00:10:58.569 fused_ordering(488) 00:10:58.569 fused_ordering(489) 00:10:58.569 fused_ordering(490) 00:10:58.569 fused_ordering(491) 00:10:58.569 fused_ordering(492) 00:10:58.569 fused_ordering(493) 00:10:58.569 fused_ordering(494) 00:10:58.569 fused_ordering(495) 00:10:58.569 fused_ordering(496) 00:10:58.569 fused_ordering(497) 00:10:58.569 fused_ordering(498) 00:10:58.569 fused_ordering(499) 00:10:58.569 fused_ordering(500) 00:10:58.569 fused_ordering(501) 00:10:58.569 fused_ordering(502) 00:10:58.569 fused_ordering(503) 00:10:58.569 fused_ordering(504) 00:10:58.569 fused_ordering(505) 00:10:58.569 fused_ordering(506) 00:10:58.569 fused_ordering(507) 00:10:58.569 fused_ordering(508) 00:10:58.569 fused_ordering(509) 00:10:58.569 fused_ordering(510) 00:10:58.569 fused_ordering(511) 00:10:58.569 fused_ordering(512) 00:10:58.569 fused_ordering(513) 00:10:58.569 fused_ordering(514) 00:10:58.569 fused_ordering(515) 00:10:58.569 fused_ordering(516) 00:10:58.569 fused_ordering(517) 00:10:58.569 fused_ordering(518) 00:10:58.569 fused_ordering(519) 00:10:58.569 fused_ordering(520) 00:10:58.569 fused_ordering(521) 00:10:58.569 fused_ordering(522) 00:10:58.569 fused_ordering(523) 00:10:58.569 fused_ordering(524) 00:10:58.569 fused_ordering(525) 00:10:58.569 fused_ordering(526) 
00:10:58.569 fused_ordering(527) ... fused_ordering(1023) [fused_ordering iteration markers 527 through 1023, timestamps 00:10:58.569 through 00:11:00.081, elided; each iteration logged only its index] 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:00.081 rmmod nvme_tcp 00:11:00.081 rmmod nvme_fabrics 00:11:00.081 rmmod nvme_keyring 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125
-- # return 0 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2535463 ']' 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2535463 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2535463 ']' 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2535463 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2535463 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2535463' 00:11:00.081 killing process with pid 2535463 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2535463 00:11:00.081 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2535463 00:11:00.343 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:00.343 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:00.343 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:00.343 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:00.343 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:00.343 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.343 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.343 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.254 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:02.254 00:11:02.254 real 0m7.453s 00:11:02.254 user 0m5.725s 00:11:02.254 sys 0m2.935s 00:11:02.254 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.254 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:02.254 ************************************ 00:11:02.254 END TEST nvmf_fused_ordering 00:11:02.254 ************************************ 00:11:02.512 19:08:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:02.512 19:08:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.512 19:08:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.512 19:08:08 
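
The teardown above is the standard pattern for these suites: clear the EXIT trap, run nvmftestfini to unload nvme-tcp/nvme-fabrics, then reap the target by pid. A minimal sketch of the killprocess step, with the body reconstructed from the logged commands rather than copied from autotest_common.sh:

    killprocess() {
        local pid=$1
        # kill -0 only probes that the pid still exists
        kill -0 "$pid" 2>/dev/null || return 0
        # never signal a process still wrapped by sudo, as the real helper checks
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        # reap it so no zombie leaks into the next test
        wait "$pid" 2>/dev/null || true
    }
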
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:02.512 ************************************ 00:11:02.512 START TEST nvmf_ns_masking 00:11:02.512 ************************************ 00:11:02.512 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:02.512 * Looking for test storage... 00:11:02.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.512 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.512 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:02.512 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.512 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.512 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated /opt/golangci, /opt/protoc and /opt/go segments elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same value with /opt/go/1.21.1/bin prepended; repeats elided] 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same value with /opt/protoc/21.7/bin prepended; repeats elided] 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo [the exported PATH value; repeats elided] 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.513 19:08:08
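
The ballooning PATH values condensed above come from paths/export.sh prepending its toolchain directories on every source without checking for duplicates. A guard along these lines (purely a sketch, not in the original script) would keep the value stable across repeated sourcing:

    prepend_path() {
        # skip the prepend when the directory is already on PATH
        case ":$PATH:" in
            *":$1:"*) ;;
            *) PATH=$1:$PATH ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/golangci/1.54.2/bin
    export PATH
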
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=957d7cde-c18a-4bf0-aba9-18003a6beb72 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=28d4b43a-9038-4f70-acc1-329c266a49fb 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2f80fb71-9e53-42ee-b9b5-2aee22ba7b27 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.513 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:04.476 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
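
The arrays being filled here are a whitelist of NIC device IDs (Intel e810/x722, Mellanox ConnectX variants) that the harness accepts for a phy run. Condensed into a standalone sketch, together with the sysfs walk that produces the "Found net devices under ..." lines below; the IDs come from the log, the variable names are illustrative:

    # Intel (0x8086) and Mellanox (0x15b3) devices the harness accepts
    pci_devs=$(lspci -Dn | awk '/8086:(1592|159b|37d2)/ || /15b3:(1013|1015|1017|1019|101d|1021|a2d6|a2dc)/ {print $1}')
    for pci in $pci_devs; do
        # every matched PCI function lists its net devices under sysfs
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done
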
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:04.477 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:04.477 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:04.477 Found net devices under 0000:08:00.0: cvl_0_0 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:04.477 Found net devices under 0000:08:00.1: cvl_0_1 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.477 19:08:10 
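
Gathered into one place, this is the point-to-point topology nvmf_tcp_init builds out of the two e810 ports: the target port moves into its own network namespace with 10.0.0.2, the initiator port stays in the root namespace with 10.0.0.1, and port 4420 is opened for NVMe/TCP. These are the same commands the trace shows (including the "ip link set lo up" that follows), minus the xtrace prefixes:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
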
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:04.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:11:04.477 00:11:04.477 --- 10.0.0.2 ping statistics --- 00:11:04.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.477 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:04.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:11:04.477 00:11:04.477 --- 10.0.0.1 ping statistics --- 00:11:04.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.477 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2537280 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2537280 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2537280 ']' 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
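
waitforlisten blocks until the freshly started nvmf_tgt is actually accepting RPCs on /var/tmp/spdk.sock. A simplified sketch of that loop, assuming the rpc_addr/max_retries locals visible in the trace (the real helper also probes the socket with an RPC call, which this drops):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            # give up early if the target died during startup
            kill -0 "$pid" 2>/dev/null || return 1
            # the socket appearing means the RPC server is up
            [[ -S $rpc_addr ]] && return 0
            sleep 0.1
        done
        return 1
    }
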
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.477 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:04.477 [2024-07-24 19:08:10.225694] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:11:04.477 [2024-07-24 19:08:10.225785] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.477 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.478 [2024-07-24 19:08:10.291737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.478 [2024-07-24 19:08:10.410168] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.478 [2024-07-24 19:08:10.410237] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.478 [2024-07-24 19:08:10.410253] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.478 [2024-07-24 19:08:10.410267] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.478 [2024-07-24 19:08:10.410279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.478 [2024-07-24 19:08:10.410316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:11:04.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:04.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:04.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:04.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:04.993 [2024-07-24 19:08:10.816117] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.993 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:04.993 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:04.993 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:05.251 Malloc1 00:11:05.251 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:05.509 Malloc2 00:11:05.509 19:08:11 
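
Stripped of the xtrace noise, the target bring-up in this stretch is four commands: start nvmf_tgt inside the target namespace, create the TCP transport, and create the two malloc bdevs the masking test will attach as namespaces (paths abbreviated from the workspace-absolute ones in the log):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
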
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:05.768 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:06.335 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.335 [2024-07-24 19:08:12.349470] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.594 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:06.594 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2f80fb71-9e53-42ee-b9b5-2aee22ba7b27 -a 10.0.0.2 -s 4420 -i 4 00:11:06.594 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:06.594 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:06.594 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.594 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:06.594 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:08.500 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:08.500 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:08.500 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:08.500 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:08.500 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.500 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:08.500 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:08.501 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:08.759 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:08.759 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:08.759 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:08.759 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.759 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:08.759 [ 0]:0x1 00:11:08.759 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:11:08.759 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.759 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ed755b96485745e9b61bf2cae4c3667f 00:11:08.759 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed755b96485745e9b61bf2cae4c3667f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.759 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:09.018 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:09.018 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:09.018 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:09.018 [ 0]:0x1 00:11:09.018 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:09.018 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:09.018 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ed755b96485745e9b61bf2cae4c3667f 00:11:09.018 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed755b96485745e9b61bf2cae4c3667f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.018 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:09.018 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:09.018 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:09.018 [ 1]:0x2 00:11:09.018 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:09.018 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:09.018 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54e1f1b02b4347c087382b2840978fc3 00:11:09.018 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54e1f1b02b4347c087382b2840978fc3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.018 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:09.018 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.277 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.843 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:10.104 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:10.104 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
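
The "[ 0]:0x1" / "[ 1]:0x2" lines are the visibility probe at work: ns_masking.sh decides whether a namespace is genuinely exposed by reading its NGUID, since a masked namespace can still enumerate but reports all zeroes. Reconstructed from the logged commands (the helper name is the script's own; the body is a sketch):

    ns_is_visible() {
        local nsid=$1                       # e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # an all-zero NGUID means the host cannot actually see the namespace
        [[ $nguid != 00000000000000000000000000000000 ]]
    }
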
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2f80fb71-9e53-42ee-b9b5-2aee22ba7b27 -a 10.0.0.2 -s 4420 -i 4 00:11:10.104 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:10.104 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:10.104 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.104 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:10.104 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:10.104 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:12.008 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:12.008 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:12.008 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.008 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:12.008 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.008 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:12.266 [ 0]:0x2 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54e1f1b02b4347c087382b2840978fc3 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54e1f1b02b4347c087382b2840978fc3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.266 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:12.524 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:12.524 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:12.524 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.524 [ 0]:0x1 00:11:12.524 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:12.524 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.524 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ed755b96485745e9b61bf2cae4c3667f 00:11:12.524 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed755b96485745e9b61bf2cae4c3667f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.524 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:12.524 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.524 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:12.524 [ 1]:0x2 00:11:12.524 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:11:12.524 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.782 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54e1f1b02b4347c087382b2840978fc3 00:11:12.782 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54e1f1b02b4347c087382b2840978fc3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.782 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:13.041 [ 0]:0x2 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
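
Behind these alternating checks the test is flipping per-host visibility with three RPCs: after add_ns with --no-auto-visible the namespace stays hidden until the host NQN is explicitly allowed, and remove_host masks it again (NQNs as defined at the top of ns_masking.sh):

    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    ./scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    ./scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
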
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54e1f1b02b4347c087382b2840978fc3 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54e1f1b02b4347c087382b2840978fc3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.041 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:13.299 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:13.299 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2f80fb71-9e53-42ee-b9b5-2aee22ba7b27 -a 10.0.0.2 -s 4420 -i 4 00:11:13.557 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:13.557 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:13.557 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.557 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:13.557 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:13.557 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:15.456 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:15.456 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:15.456 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.456 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:15.456 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.456 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:15.456 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:15.456 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:15.713 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:15.713 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:15.713 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:15.713 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:11:15.713 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:15.713 [ 0]:0x1 00:11:15.713 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:15.713 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:15.713 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ed755b96485745e9b61bf2cae4c3667f 00:11:15.713 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed755b96485745e9b61bf2cae4c3667f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.713 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:15.713 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:15.713 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:15.713 [ 1]:0x2 00:11:15.713 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:15.713 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:15.970 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54e1f1b02b4347c087382b2840978fc3 00:11:15.970 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54e1f1b02b4347c087382b2840978fc3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.970 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:16.227 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:16.227 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:16.227 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:16.227 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:16.227 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.227 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:16.227 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.227 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:16.227 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:16.227 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:16.227 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:16.227 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:16.227 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:16.227 19:08:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:16.227 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:16.228 [ 0]:0x2 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54e1f1b02b4347c087382b2840978fc3 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54e1f1b02b4347c087382b2840978fc3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:16.228 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:16.485 [2024-07-24 19:08:22.432634] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:16.485 request: 00:11:16.485 { 00:11:16.485 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:16.485 "nsid": 2, 00:11:16.485 "host": "nqn.2016-06.io.spdk:host1", 00:11:16.485 "method": "nvmf_ns_remove_host", 00:11:16.485 "req_id": 1 00:11:16.485 } 00:11:16.485 Got JSON-RPC error response 00:11:16.485 response: 00:11:16.485 { 00:11:16.485 "code": -32602, 00:11:16.485 "message": "Invalid parameters" 00:11:16.485 } 00:11:16.485 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:16.485 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:16.485 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:16.485 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:16.485 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:16.486 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:16.486 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:16.486 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:16.486 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.486 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:16.486 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.486 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:16.486 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:16.486 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:16.486 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:16.486 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:16.486 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:16.486 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:16.486 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:16.743 [ 0]:0x2 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54e1f1b02b4347c087382b2840978fc3 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54e1f1b02b4347c087382b2840978fc3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2538555 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2538555 /var/tmp/host.sock 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2538555 ']' 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:16.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:16.743 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:16.743 [2024-07-24 19:08:22.645875] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
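A note on the checks traced above: every visibility probe in this section goes through the same small helper in target/ns_masking.sh, whose lines 43-45 are what the -x trace keeps printing. A minimal sketch reconstructed from that trace (not the verbatim script; /dev/nvme0 is the controller created by the earlier nvme connect):

    # Reconstructed from the xtrace output above; a sketch, not the script.
    ns_is_visible() {
        # Prints the matching "[ n]:0xN" line when the namespace is listed.
        nvme list-ns /dev/nvme0 | grep "$1"
        # A namespace masked from this host identifies with an all-zero
        # NGUID, so the helper's exit status is the outcome of this test.
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    ns_is_visible 0x1   # succeeds after nvmf_ns_add_host for this host;
                        # fails (wrapped in NOT) after nvmf_ns_remove_host

That is why nguid=00000000000000000000000000000000 in the trace marks a masked namespace, while a real value such as 54e1f1b02b4347c087382b2840978fc3 marks a visible one.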
00:11:16.743 [2024-07-24 19:08:22.645974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2538555 ] 00:11:16.743 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.743 [2024-07-24 19:08:22.707342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.000 [2024-07-24 19:08:22.824302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.258 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:17.258 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:11:17.258 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.514 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:17.771 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 957d7cde-c18a-4bf0-aba9-18003a6beb72 00:11:17.772 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:17.772 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 957D7CDEC18A4BF0ABA918003A6BEB72 -i 00:11:18.029 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 28d4b43a-9038-4f70-acc1-329c266a49fb 00:11:18.029 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:18.029 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 28D4B43A90384F70ACC1329C266A49FB -i 00:11:18.286 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:18.851 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:18.851 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:18.851 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:19.415 nvme0n1 00:11:19.415 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:19.415 19:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:19.673 nvme1n2 00:11:19.930 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:19.930 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:19.930 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:19.930 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:19.930 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:20.188 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:20.188 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:20.188 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:20.188 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:20.445 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 957d7cde-c18a-4bf0-aba9-18003a6beb72 == \9\5\7\d\7\c\d\e\-\c\1\8\a\-\4\b\f\0\-\a\b\a\9\-\1\8\0\0\3\a\6\b\e\b\7\2 ]] 00:11:20.445 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:20.445 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:20.445 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:20.704 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 28d4b43a-9038-4f70-acc1-329c266a49fb == \2\8\d\4\b\4\3\a\-\9\0\3\8\-\4\f\7\0\-\a\c\c\1\-\3\2\9\c\2\6\6\a\4\9\f\b ]] 00:11:20.704 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2538555 00:11:20.704 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2538555 ']' 00:11:20.704 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2538555 00:11:20.704 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:11:20.704 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.704 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2538555 00:11:20.704 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:20.704 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:20.704 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 2538555' 00:11:20.704 killing process with pid 2538555 00:11:20.704 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2538555 00:11:20.704 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2538555 00:11:20.962 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:21.528 rmmod nvme_tcp 00:11:21.528 rmmod nvme_fabrics 00:11:21.528 rmmod nvme_keyring 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2537280 ']' 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2537280 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2537280 ']' 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2537280 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2537280 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2537280' 00:11:21.528 killing process with pid 2537280 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2537280 00:11:21.528 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2537280 00:11:21.789 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:21.789 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:21.789 
19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:21.789 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:21.789 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:21.789 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.789 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.789 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.695 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:23.695 00:11:23.695 real 0m21.341s 00:11:23.695 user 0m29.122s 00:11:23.695 sys 0m3.898s 00:11:23.695 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.695 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:23.695 ************************************ 00:11:23.695 END TEST nvmf_ns_masking 00:11:23.695 ************************************ 00:11:23.695 19:08:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:11:23.695 19:08:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:23.695 19:08:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:23.695 19:08:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.695 19:08:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:23.695 ************************************ 00:11:23.695 START TEST nvmf_nvme_cli 00:11:23.695 ************************************ 00:11:23.695 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:23.954 * Looking for test storage... 
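The nvme_cli test starting here first sources test/nvmf/common.sh, whose traced lines 17-19 establish the initiator identity reused by every nvme discover/connect below. Condensed into a sketch (the hostid extraction is an assumption; the trace only shows the resulting value, which is the uuid suffix of the generated NQN):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed extraction; value matches the trace
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    NVME_CONNECT='nvme connect'
    NVMF_PORT=4420

These are the --hostnqn/--hostid pairs that appear verbatim in the discover and connect commands further down.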
00:11:23.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.954 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.954 19:08:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:23.955 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.861 19:08:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:25.861 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:25.862 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:25.862 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:25.862 19:08:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:25.862 Found net devices under 0000:08:00.0: cvl_0_0 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:25.862 Found net devices under 0000:08:00.1: cvl_0_1 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.862 19:08:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:25.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:11:25.862 00:11:25.862 --- 10.0.0.2 ping statistics --- 00:11:25.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.862 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:25.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:11:25.862 00:11:25.862 --- 10.0.0.1 ping statistics --- 00:11:25.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.862 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2540490 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2540490 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2540490 ']' 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:25.862 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.862 [2024-07-24 19:08:31.574258] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
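All of the interface plumbing traced above (nvmftestinit through nvmf_tcp_init) reduces to a short ip/iptables sequence that moves the target port into its own network namespace and leaves the initiator port in the default one. Condensed from the traced commands, with cvl_0_0/cvl_0_1 being the two detected e810 ports:

    # Target side lives in its own netns; initiator stays in the default one.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity checks, as in the log:
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Running the target under "ip netns exec cvl_0_0_ns_spdk" is what lets a single host exercise a real TCP path between two physical ports, which is why nvmf_tgt is launched with that prefix in the very next traced command.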
00:11:25.862 [2024-07-24 19:08:31.574362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.862 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.862 [2024-07-24 19:08:31.641358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.862 [2024-07-24 19:08:31.759713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.862 [2024-07-24 19:08:31.759779] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.862 [2024-07-24 19:08:31.759794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.863 [2024-07-24 19:08:31.759808] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.863 [2024-07-24 19:08:31.759819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.863 [2024-07-24 19:08:31.759918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.863 [2024-07-24 19:08:31.760036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.863 [2024-07-24 19:08:31.760086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.863 [2024-07-24 19:08:31.760090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.122 [2024-07-24 19:08:31.911817] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.122 Malloc0 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:26.122 19:08:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.122 Malloc1 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.122 [2024-07-24 19:08:31.989844] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.122 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.122 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.122 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:11:26.381 00:11:26.381 Discovery Log Number of Records 2, Generation counter 2 00:11:26.381 =====Discovery Log Entry 0====== 00:11:26.381 trtype: tcp 00:11:26.381 adrfam: ipv4 00:11:26.381 subtype: current discovery subsystem 00:11:26.381 treq: not required 
00:11:26.381 portid: 0 00:11:26.381 trsvcid: 4420 00:11:26.381 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:26.381 traddr: 10.0.0.2 00:11:26.381 eflags: explicit discovery connections, duplicate discovery information 00:11:26.381 sectype: none 00:11:26.381 =====Discovery Log Entry 1====== 00:11:26.381 trtype: tcp 00:11:26.381 adrfam: ipv4 00:11:26.381 subtype: nvme subsystem 00:11:26.381 treq: not required 00:11:26.381 portid: 0 00:11:26.381 trsvcid: 4420 00:11:26.381 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:26.381 traddr: 10.0.0.2 00:11:26.381 eflags: none 00:11:26.381 sectype: none 00:11:26.381 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:26.381 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:26.381 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:26.381 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:26.381 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:26.381 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:26.381 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:26.381 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:26.381 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:26.381 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:26.381 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:26.948 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:26.948 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:26.948 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.948 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:26.948 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:26.948 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:28.848 /dev/nvme0n1 ]] 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.848 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:28.848 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:28.848 rmmod nvme_tcp 00:11:28.848 rmmod nvme_fabrics 00:11:29.107 rmmod nvme_keyring 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2540490 ']' 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2540490 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2540490 ']' 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2540490 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2540490 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2540490' 00:11:29.107 killing process with pid 2540490 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2540490 00:11:29.107 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2540490 00:11:29.367 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:29.367 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:29.367 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:29.367 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.367 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:29.367 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.367 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.367 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.309 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:31.309 00:11:31.309 real 0m7.516s 00:11:31.309 user 0m13.834s 00:11:31.309 sys 0m1.902s 00:11:31.309 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.309 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:31.309 ************************************ 00:11:31.309 END TEST nvmf_nvme_cli 00:11:31.309 ************************************ 00:11:31.309 19:08:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:11:31.309 19:08:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:31.309 19:08:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:31.309 19:08:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.309 19:08:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:31.309 ************************************ 00:11:31.309 START TEST nvmf_vfio_user 00:11:31.309 ************************************ 00:11:31.309 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:31.309 * Looking for test storage... 
00:11:31.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.568 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
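The host identity exported by common.sh in the entries above is generated once per run and reused by every nvme-cli invocation in these tests (see the discover/connect commands in the nvmf_nvme_cli test earlier). A minimal sketch of the pattern, assuming the host ID is derived from the UUID suffix of the generated NQN (the log records only the resulting values):

    # Sketch of the common.sh host-identity setup seen in the xtrace above.
    # The ${...##*uuid:} derivation is an assumption; the log shows only the results.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # the bare <uuid> portion
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # Reused verbatim by later nvme-cli calls, e.g.:
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420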
00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:31.569 19:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2541141 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2541141' 00:11:31.569 Process pid: 2541141 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2541141 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2541141 ']' 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.569 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:31.569 [2024-07-24 19:08:37.397119] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:11:31.569 [2024-07-24 19:08:37.397219] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.569 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.569 [2024-07-24 19:08:37.458179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.569 [2024-07-24 19:08:37.575205] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.569 [2024-07-24 19:08:37.575261] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
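The waitforlisten step above blocks until the just-forked nvmf_tgt answers on its RPC socket before any configuration RPC is issued. Reduced to its essentials it behaves like the following sketch (the real autotest helper also validates the pid and enforces a timeout):

    # Illustrative reduction of the launch-then-wait pattern logged above.
    # -i 0 = shm id, -e 0xFFFF = tracepoint group mask, -m '[0,1,2,3]' = core list.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    # Poll the default RPC socket until the target is ready to accept RPCs.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done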
00:11:31.569 [2024-07-24 19:08:37.575278] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.569 [2024-07-24 19:08:37.575293] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.569 [2024-07-24 19:08:37.575305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.569 [2024-07-24 19:08:37.575364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.569 [2024-07-24 19:08:37.575422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.569 [2024-07-24 19:08:37.575489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.569 [2024-07-24 19:08:37.575476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.828 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:31.828 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:11:31.828 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:32.761 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:33.018 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:33.018 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:33.018 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:33.018 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:33.018 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:33.276 Malloc1 00:11:33.534 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:33.534 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:33.792 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:34.050 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:34.050 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:34.050 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:34.615 Malloc2 00:11:34.615 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
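Stripped of the xtrace noise, the setup loop above performs the same steps for each of the two vfio-user controllers; for device 1 the commands are exactly those logged (rpc.py path shortened here for readability):

    rpc.py nvmf_create_transport -t VFIOUSER            # once, before the loop
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1     # directory for the vfio-user socket
    rpc.py bdev_malloc_create 64 512 -b Malloc1         # 64 MB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
    # The second iteration repeats the last five steps with Malloc2, cnode2,
    # serial SPDK2 and /var/run/vfio-user/domain/vfio-user2/2.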
00:11:34.873 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:35.131 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:35.393 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:35.393 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:35.393 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:35.393 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:35.393 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:35.393 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:35.393 [2024-07-24 19:08:41.279077] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:11:35.393 [2024-07-24 19:08:41.279130] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541555 ] 00:11:35.393 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.393 [2024-07-24 19:08:41.320540] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:35.393 [2024-07-24 19:08:41.323346] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:35.393 [2024-07-24 19:08:41.323377] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff544641000 00:11:35.393 [2024-07-24 19:08:41.324339] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.393 [2024-07-24 19:08:41.325340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.393 [2024-07-24 19:08:41.326345] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.393 [2024-07-24 19:08:41.327353] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:35.393 [2024-07-24 19:08:41.328361] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:35.393 [2024-07-24 19:08:41.331491] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.393 [2024-07-24 19:08:41.332378] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:35.393 [2024-07-24 19:08:41.333383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.393 [2024-07-24 19:08:41.334394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:35.393 [2024-07-24 19:08:41.334415] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff544636000 00:11:35.393 [2024-07-24 19:08:41.335865] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:35.393 [2024-07-24 19:08:41.357446] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:35.393 [2024-07-24 19:08:41.357497] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:35.393 [2024-07-24 19:08:41.360546] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:35.393 [2024-07-24 19:08:41.360611] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:35.393 [2024-07-24 19:08:41.360721] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:35.393 [2024-07-24 19:08:41.360755] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:35.393 [2024-07-24 19:08:41.360767] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:35.393 [2024-07-24 19:08:41.361544] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:35.393 [2024-07-24 19:08:41.361572] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:35.393 [2024-07-24 19:08:41.361588] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:35.393 [2024-07-24 19:08:41.362544] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:35.393 [2024-07-24 19:08:41.362565] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:35.393 [2024-07-24 19:08:41.362581] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:35.393 [2024-07-24 19:08:41.363547] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:35.393 [2024-07-24 19:08:41.363567] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:35.393 [2024-07-24 19:08:41.364558] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:35.393 [2024-07-24 19:08:41.364580] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:35.393 [2024-07-24 19:08:41.364591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:35.393 [2024-07-24 19:08:41.364604] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:35.393 [2024-07-24 19:08:41.364716] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:35.393 [2024-07-24 19:08:41.364730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:35.393 [2024-07-24 19:08:41.364741] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:35.393 [2024-07-24 19:08:41.365561] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:35.393 [2024-07-24 19:08:41.366569] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:35.393 [2024-07-24 19:08:41.367571] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:35.393 [2024-07-24 19:08:41.368566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:35.393 [2024-07-24 19:08:41.368670] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:35.394 [2024-07-24 19:08:41.369581] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:35.394 [2024-07-24 19:08:41.369600] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:35.394 [2024-07-24 19:08:41.369610] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:35.394 [2024-07-24 19:08:41.369638] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:35.394 [2024-07-24 19:08:41.369660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:35.394 [2024-07-24 19:08:41.369689] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:35.394 [2024-07-24 19:08:41.369701] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.394 [2024-07-24 19:08:41.369709] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:35.394 [2024-07-24 19:08:41.369731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.394 [2024-07-24 19:08:41.369802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:35.394 [2024-07-24 19:08:41.369821] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:35.394 [2024-07-24 19:08:41.369830] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:35.394 [2024-07-24 19:08:41.369839] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:35.394 [2024-07-24 19:08:41.369848] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:35.394 [2024-07-24 19:08:41.369857] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:35.394 [2024-07-24 19:08:41.369867] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:35.394 [2024-07-24 19:08:41.369875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:35.394 [2024-07-24 19:08:41.369890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:35.394 [2024-07-24 19:08:41.369917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:35.394 [2024-07-24 19:08:41.369938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:35.394 [2024-07-24 19:08:41.369963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.394 [2024-07-24 19:08:41.369979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.394 [2024-07-24 19:08:41.369993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.394 [2024-07-24 19:08:41.370006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.394 [2024-07-24 19:08:41.370016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:35.394 [2024-07-24 19:08:41.370032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:35.394 [2024-07-24 19:08:41.370049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:35.394 [2024-07-24 19:08:41.370062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:35.394 [2024-07-24 19:08:41.370074] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:35.394 
[2024-07-24 19:08:41.370084] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:35.394 [2024-07-24 19:08:41.370102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:35.394 [2024-07-24 19:08:41.370114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:35.394 [2024-07-24 19:08:41.370136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:35.394 [2024-07-24 19:08:41.370149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:35.394 [2024-07-24 19:08:41.370227] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:35.394 [2024-07-24 19:08:41.370245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:35.394 [2024-07-24 19:08:41.370260] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:35.394 [2024-07-24 19:08:41.370270] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:35.394 [2024-07-24 19:08:41.370277] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:35.394 [2024-07-24 19:08:41.370287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:35.394 [2024-07-24 19:08:41.370304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:35.394 [2024-07-24 19:08:41.370323] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:35.394 [2024-07-24 19:08:41.370342] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:35.394 [2024-07-24 19:08:41.370359] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:35.394 [2024-07-24 19:08:41.370376] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:35.394 [2024-07-24 19:08:41.370387] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.394 [2024-07-24 19:08:41.370394] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:35.394 [2024-07-24 19:08:41.370404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.394 [2024-07-24 19:08:41.370437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:35.394 [2024-07-24 19:08:41.370462] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:11:35.394 [2024-07-24 19:08:41.370478] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:35.394 [2024-07-24 19:08:41.370501] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:35.394 [2024-07-24 19:08:41.370511] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.394 [2024-07-24 19:08:41.370518] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:35.394 [2024-07-24 19:08:41.370529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.394 [2024-07-24 19:08:41.370548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:35.394 [2024-07-24 19:08:41.370564] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:35.395 [2024-07-24 19:08:41.370578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:11:35.395 [2024-07-24 19:08:41.370593] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:35.395 [2024-07-24 19:08:41.370609] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:35.395 [2024-07-24 19:08:41.370619] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:35.395 [2024-07-24 19:08:41.370629] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:35.395 [2024-07-24 19:08:41.370639] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:35.395 [2024-07-24 19:08:41.370648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:35.395 [2024-07-24 19:08:41.370657] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:35.395 [2024-07-24 19:08:41.370687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:35.395 [2024-07-24 19:08:41.370707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:35.395 [2024-07-24 19:08:41.370730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:35.395 [2024-07-24 19:08:41.370744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:35.395 [2024-07-24 19:08:41.370766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:35.395 [2024-07-24 
19:08:41.370781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:35.395 [2024-07-24 19:08:41.370800] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:35.395 [2024-07-24 19:08:41.370813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:35.395 [2024-07-24 19:08:41.370838] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:35.395 [2024-07-24 19:08:41.370850] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:35.395 [2024-07-24 19:08:41.370857] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:35.395 [2024-07-24 19:08:41.370864] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:35.395 [2024-07-24 19:08:41.370871] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:11:35.395 [2024-07-24 19:08:41.370881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:35.395 [2024-07-24 19:08:41.370895] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:35.395 [2024-07-24 19:08:41.370905] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:35.395 [2024-07-24 19:08:41.370911] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:35.395 [2024-07-24 19:08:41.370922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:35.395 [2024-07-24 19:08:41.370934] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:35.395 [2024-07-24 19:08:41.370944] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.395 [2024-07-24 19:08:41.370951] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:35.395 [2024-07-24 19:08:41.370961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.395 [2024-07-24 19:08:41.370975] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:35.395 [2024-07-24 19:08:41.370984] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:35.395 [2024-07-24 19:08:41.370991] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:35.395 [2024-07-24 19:08:41.371002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:35.395 [2024-07-24 19:08:41.371015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:35.395 [2024-07-24 19:08:41.371036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:35.395 [2024-07-24 
19:08:41.371058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:35.395 [2024-07-24 19:08:41.371071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:35.395 ===================================================== 00:11:35.395 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:35.395 ===================================================== 00:11:35.395 Controller Capabilities/Features 00:11:35.395 ================================ 00:11:35.395 Vendor ID: 4e58 00:11:35.395 Subsystem Vendor ID: 4e58 00:11:35.395 Serial Number: SPDK1 00:11:35.395 Model Number: SPDK bdev Controller 00:11:35.395 Firmware Version: 24.09 00:11:35.395 Recommended Arb Burst: 6 00:11:35.395 IEEE OUI Identifier: 8d 6b 50 00:11:35.395 Multi-path I/O 00:11:35.395 May have multiple subsystem ports: Yes 00:11:35.395 May have multiple controllers: Yes 00:11:35.395 Associated with SR-IOV VF: No 00:11:35.395 Max Data Transfer Size: 131072 00:11:35.395 Max Number of Namespaces: 32 00:11:35.395 Max Number of I/O Queues: 127 00:11:35.395 NVMe Specification Version (VS): 1.3 00:11:35.395 NVMe Specification Version (Identify): 1.3 00:11:35.395 Maximum Queue Entries: 256 00:11:35.395 Contiguous Queues Required: Yes 00:11:35.395 Arbitration Mechanisms Supported 00:11:35.395 Weighted Round Robin: Not Supported 00:11:35.395 Vendor Specific: Not Supported 00:11:35.395 Reset Timeout: 15000 ms 00:11:35.395 Doorbell Stride: 4 bytes 00:11:35.395 NVM Subsystem Reset: Not Supported 00:11:35.395 Command Sets Supported 00:11:35.395 NVM Command Set: Supported 00:11:35.395 Boot Partition: Not Supported 00:11:35.395 Memory Page Size Minimum: 4096 bytes 00:11:35.395 Memory Page Size Maximum: 4096 bytes 00:11:35.395 Persistent Memory Region: Not Supported 00:11:35.395 Optional Asynchronous Events Supported 00:11:35.395 Namespace Attribute Notices: Supported 00:11:35.395 Firmware Activation Notices: Not Supported 00:11:35.395 ANA Change Notices: Not Supported 00:11:35.395 PLE Aggregate Log Change Notices: Not Supported 00:11:35.395 LBA Status Info Alert Notices: Not Supported 00:11:35.395 EGE Aggregate Log Change Notices: Not Supported 00:11:35.395 Normal NVM Subsystem Shutdown event: Not Supported 00:11:35.396 Zone Descriptor Change Notices: Not Supported 00:11:35.396 Discovery Log Change Notices: Not Supported 00:11:35.396 Controller Attributes 00:11:35.396 128-bit Host Identifier: Supported 00:11:35.396 Non-Operational Permissive Mode: Not Supported 00:11:35.396 NVM Sets: Not Supported 00:11:35.396 Read Recovery Levels: Not Supported 00:11:35.396 Endurance Groups: Not Supported 00:11:35.396 Predictable Latency Mode: Not Supported 00:11:35.396 Traffic Based Keep ALive: Not Supported 00:11:35.396 Namespace Granularity: Not Supported 00:11:35.396 SQ Associations: Not Supported 00:11:35.396 UUID List: Not Supported 00:11:35.396 Multi-Domain Subsystem: Not Supported 00:11:35.396 Fixed Capacity Management: Not Supported 00:11:35.396 Variable Capacity Management: Not Supported 00:11:35.396 Delete Endurance Group: Not Supported 00:11:35.396 Delete NVM Set: Not Supported 00:11:35.396 Extended LBA Formats Supported: Not Supported 00:11:35.396 Flexible Data Placement Supported: Not Supported 00:11:35.396 00:11:35.396 Controller Memory Buffer Support 00:11:35.396 ================================ 00:11:35.396 Supported: No 00:11:35.396 00:11:35.396 Persistent 
Memory Region Support 00:11:35.396 ================================ 00:11:35.396 Supported: No 00:11:35.396 00:11:35.396 Admin Command Set Attributes 00:11:35.396 ============================ 00:11:35.396 Security Send/Receive: Not Supported 00:11:35.396 Format NVM: Not Supported 00:11:35.396 Firmware Activate/Download: Not Supported 00:11:35.396 Namespace Management: Not Supported 00:11:35.396 Device Self-Test: Not Supported 00:11:35.396 Directives: Not Supported 00:11:35.396 NVMe-MI: Not Supported 00:11:35.396 Virtualization Management: Not Supported 00:11:35.396 Doorbell Buffer Config: Not Supported 00:11:35.396 Get LBA Status Capability: Not Supported 00:11:35.396 Command & Feature Lockdown Capability: Not Supported 00:11:35.396 Abort Command Limit: 4 00:11:35.396 Async Event Request Limit: 4 00:11:35.396 Number of Firmware Slots: N/A 00:11:35.396 Firmware Slot 1 Read-Only: N/A 00:11:35.396 Firmware Activation Without Reset: N/A 00:11:35.396 Multiple Update Detection Support: N/A 00:11:35.396 Firmware Update Granularity: No Information Provided 00:11:35.396 Per-Namespace SMART Log: No 00:11:35.396 Asymmetric Namespace Access Log Page: Not Supported 00:11:35.396 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:35.396 Command Effects Log Page: Supported 00:11:35.396 Get Log Page Extended Data: Supported 00:11:35.396 Telemetry Log Pages: Not Supported 00:11:35.396 Persistent Event Log Pages: Not Supported 00:11:35.396 Supported Log Pages Log Page: May Support 00:11:35.396 Commands Supported & Effects Log Page: Not Supported 00:11:35.396 Feature Identifiers & Effects Log Page:May Support 00:11:35.396 NVMe-MI Commands & Effects Log Page: May Support 00:11:35.396 Data Area 4 for Telemetry Log: Not Supported 00:11:35.396 Error Log Page Entries Supported: 128 00:11:35.396 Keep Alive: Supported 00:11:35.396 Keep Alive Granularity: 10000 ms 00:11:35.396 00:11:35.396 NVM Command Set Attributes 00:11:35.396 ========================== 00:11:35.396 Submission Queue Entry Size 00:11:35.396 Max: 64 00:11:35.396 Min: 64 00:11:35.396 Completion Queue Entry Size 00:11:35.396 Max: 16 00:11:35.396 Min: 16 00:11:35.396 Number of Namespaces: 32 00:11:35.396 Compare Command: Supported 00:11:35.396 Write Uncorrectable Command: Not Supported 00:11:35.396 Dataset Management Command: Supported 00:11:35.396 Write Zeroes Command: Supported 00:11:35.396 Set Features Save Field: Not Supported 00:11:35.396 Reservations: Not Supported 00:11:35.396 Timestamp: Not Supported 00:11:35.396 Copy: Supported 00:11:35.396 Volatile Write Cache: Present 00:11:35.396 Atomic Write Unit (Normal): 1 00:11:35.396 Atomic Write Unit (PFail): 1 00:11:35.396 Atomic Compare & Write Unit: 1 00:11:35.396 Fused Compare & Write: Supported 00:11:35.396 Scatter-Gather List 00:11:35.396 SGL Command Set: Supported (Dword aligned) 00:11:35.396 SGL Keyed: Not Supported 00:11:35.396 SGL Bit Bucket Descriptor: Not Supported 00:11:35.396 SGL Metadata Pointer: Not Supported 00:11:35.396 Oversized SGL: Not Supported 00:11:35.396 SGL Metadata Address: Not Supported 00:11:35.396 SGL Offset: Not Supported 00:11:35.396 Transport SGL Data Block: Not Supported 00:11:35.396 Replay Protected Memory Block: Not Supported 00:11:35.396 00:11:35.396 Firmware Slot Information 00:11:35.396 ========================= 00:11:35.396 Active slot: 1 00:11:35.396 Slot 1 Firmware Revision: 24.09 00:11:35.396 00:11:35.396 00:11:35.396 Commands Supported and Effects 00:11:35.396 ============================== 00:11:35.396 Admin Commands 00:11:35.396 -------------- 00:11:35.396 Get 
Log Page (02h): Supported 00:11:35.396 Identify (06h): Supported 00:11:35.396 Abort (08h): Supported 00:11:35.396 Set Features (09h): Supported 00:11:35.396 Get Features (0Ah): Supported 00:11:35.396 Asynchronous Event Request (0Ch): Supported 00:11:35.396 Keep Alive (18h): Supported 00:11:35.396 I/O Commands 00:11:35.396 ------------ 00:11:35.396 Flush (00h): Supported LBA-Change 00:11:35.396 Write (01h): Supported LBA-Change 00:11:35.396 Read (02h): Supported 00:11:35.396 Compare (05h): Supported 00:11:35.396 Write Zeroes (08h): Supported LBA-Change 00:11:35.397 Dataset Management (09h): Supported LBA-Change 00:11:35.397 Copy (19h): Supported LBA-Change 00:11:35.397 00:11:35.397 Error Log 00:11:35.397 ========= 00:11:35.397 00:11:35.397 Arbitration 00:11:35.397 =========== 00:11:35.397 Arbitration Burst: 1 00:11:35.397 00:11:35.397 Power Management 00:11:35.397 ================ 00:11:35.397 Number of Power States: 1 00:11:35.397 Current Power State: Power State #0 00:11:35.397 Power State #0: 00:11:35.397 Max Power: 0.00 W 00:11:35.397 Non-Operational State: Operational 00:11:35.397 Entry Latency: Not Reported 00:11:35.397 Exit Latency: Not Reported 00:11:35.397 Relative Read Throughput: 0 00:11:35.397 Relative Read Latency: 0 00:11:35.397 Relative Write Throughput: 0 00:11:35.397 Relative Write Latency: 0 00:11:35.397 Idle Power: Not Reported 00:11:35.397 Active Power: Not Reported 00:11:35.397 Non-Operational Permissive Mode: Not Supported 00:11:35.397 00:11:35.397 Health Information 00:11:35.397 ================== 00:11:35.397 Critical Warnings: 00:11:35.397 Available Spare Space: OK 00:11:35.397 Temperature: OK 00:11:35.397 Device Reliability: OK 00:11:35.397 Read Only: No 00:11:35.397 Volatile Memory Backup: OK 00:11:35.397 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:35.397 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:35.397 Available Spare: 0% 00:11:35.397 [2024-07-24 19:08:41.371207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:35.397 [2024-07-24 19:08:41.371225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:35.397 [2024-07-24 19:08:41.371271] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:35.397 [2024-07-24 19:08:41.371296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.397 [2024-07-24 19:08:41.371309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.397 [2024-07-24 19:08:41.371321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.397 [2024-07-24 19:08:41.371332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.397 [2024-07-24 19:08:41.374492] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:35.397 [2024-07-24 19:08:41.374517] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:35.397 [2024-07-24 19:08:41.374601] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:35.397 [2024-07-24 19:08:41.374689] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:35.397 [2024-07-24 19:08:41.374703] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:35.397 [2024-07-24 19:08:41.375608] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:35.397 [2024-07-24 19:08:41.375633] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:35.397 [2024-07-24 19:08:41.375709] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:35.397 [2024-07-24 19:08:41.377661] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:35.655 Available Spare Threshold: 0% 00:11:35.655 Life Percentage Used: 0% 00:11:35.655 Data Units Read: 0 00:11:35.655 Data Units Written: 0 00:11:35.655 Host Read Commands: 0 00:11:35.655 Host Write Commands: 0 00:11:35.655 Controller Busy Time: 0 minutes 00:11:35.655 Power Cycles: 0 00:11:35.655 Power On Hours: 0 hours 00:11:35.655 Unsafe Shutdowns: 0 00:11:35.655 Unrecoverable Media Errors: 0 00:11:35.655 Lifetime Error Log Entries: 0 00:11:35.655 Warning Temperature Time: 0 minutes 00:11:35.655 Critical Temperature Time: 0 minutes 00:11:35.655 00:11:35.655 Number of Queues 00:11:35.655 ================ 00:11:35.655 Number of I/O Submission Queues: 127 00:11:35.655 Number of I/O Completion Queues: 127 00:11:35.655 00:11:35.655 Active Namespaces 00:11:35.655 ================= 00:11:35.655 Namespace ID:1 00:11:35.655 Error Recovery Timeout: Unlimited 00:11:35.655 Command Set Identifier: NVM (00h) 00:11:35.655 Deallocate: Supported 00:11:35.655 Deallocated/Unwritten Error: Not Supported 00:11:35.655 Deallocated Read Value: Unknown 00:11:35.655 Deallocate in Write Zeroes: Not Supported 00:11:35.655 Deallocated Guard Field: 0xFFFF 00:11:35.655 Flush: Supported 00:11:35.655 Reservation: Supported 00:11:35.655 Namespace Sharing Capabilities: Multiple Controllers 00:11:35.656 Size (in LBAs): 131072 (0GiB) 00:11:35.656 Capacity (in LBAs): 131072 (0GiB) 00:11:35.656 Utilization (in LBAs): 131072 (0GiB) 00:11:35.656 NGUID: 193CCF3387B24516A9889A5D424B2FBC 00:11:35.656 UUID: 193ccf33-87b2-4516-a988-9a5d424b2fbc 00:11:35.656 Thin Provisioning: Not Supported 00:11:35.656 Per-NS Atomic Units: Yes 00:11:35.656 Atomic Boundary Size (Normal): 0 00:11:35.656 Atomic Boundary Size (PFail): 0 00:11:35.656 Atomic Boundary Offset: 0 00:11:35.656 Maximum Single Source Range Length: 65535 00:11:35.656 Maximum Copy Length: 65535 00:11:35.656 Maximum Source Range Count: 1 00:11:35.656 NGUID/EUI64 Never Reused: No 00:11:35.656 Namespace Write Protected: No 00:11:35.656 Number of LBA Formats: 1 00:11:35.656 Current LBA Format: LBA Format #00 00:11:35.656 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:35.656 00:11:35.656 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:35.656 EAL: No free 2048 kB hugepages reported 
on node 1 00:11:35.656 [2024-07-24 19:08:41.603779] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:40.921 Initializing NVMe Controllers 00:11:40.921 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:40.921 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:40.921 Initialization complete. Launching workers. 00:11:40.921 ======================================================== 00:11:40.921 Latency(us) 00:11:40.921 Device Information : IOPS MiB/s Average min max 00:11:40.921 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24065.72 94.01 5318.70 1485.92 10556.74 00:11:40.921 ======================================================== 00:11:40.921 Total : 24065.72 94.01 5318.70 1485.92 10556.74 00:11:40.921 00:11:40.921 [2024-07-24 19:08:46.624502] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:40.921 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:40.921 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.921 [2024-07-24 19:08:46.867718] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:46.182 Initializing NVMe Controllers 00:11:46.182 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:46.182 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:46.182 Initialization complete. Launching workers. 
00:11:46.182 ======================================================== 00:11:46.182 Latency(us) 00:11:46.182 Device Information : IOPS MiB/s Average min max 00:11:46.182 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15973.65 62.40 8018.25 7658.85 15983.61 00:11:46.182 ======================================================== 00:11:46.182 Total : 15973.65 62.40 8018.25 7658.85 15983.61 00:11:46.182 00:11:46.182 [2024-07-24 19:08:51.911250] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:46.182 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:46.182 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.182 [2024-07-24 19:08:52.142469] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:51.444 [2024-07-24 19:08:57.208740] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:51.444 Initializing NVMe Controllers 00:11:51.444 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:51.445 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:51.445 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:51.445 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:51.445 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:51.445 Initialization complete. Launching workers. 00:11:51.445 Starting thread on core 2 00:11:51.445 Starting thread on core 3 00:11:51.445 Starting thread on core 1 00:11:51.445 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:51.445 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.704 [2024-07-24 19:08:57.513249] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:54.995 [2024-07-24 19:09:00.602154] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:54.995 Initializing NVMe Controllers 00:11:54.995 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:54.995 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:54.995 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:54.995 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:54.995 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:54.995 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:54.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:54.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:54.996 Initialization complete. Launching workers. 
00:11:54.996 Starting thread on core 1 with urgent priority queue 00:11:54.996 Starting thread on core 2 with urgent priority queue 00:11:54.996 Starting thread on core 3 with urgent priority queue 00:11:54.996 Starting thread on core 0 with urgent priority queue 00:11:54.996 SPDK bdev Controller (SPDK1 ) core 0: 8215.33 IO/s 12.17 secs/100000 ios 00:11:54.996 SPDK bdev Controller (SPDK1 ) core 1: 6629.33 IO/s 15.08 secs/100000 ios 00:11:54.996 SPDK bdev Controller (SPDK1 ) core 2: 7946.67 IO/s 12.58 secs/100000 ios 00:11:54.996 SPDK bdev Controller (SPDK1 ) core 3: 6741.33 IO/s 14.83 secs/100000 ios 00:11:54.996 ======================================================== 00:11:54.996 00:11:54.996 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:54.996 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.996 [2024-07-24 19:09:00.886041] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:54.996 Initializing NVMe Controllers 00:11:54.996 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:54.996 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:54.996 Namespace ID: 1 size: 0GB 00:11:54.996 Initialization complete. 00:11:54.996 INFO: using host memory buffer for IO 00:11:54.996 Hello world! 00:11:54.996 [2024-07-24 19:09:00.919765] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:54.996 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:55.252 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.252 [2024-07-24 19:09:01.203927] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:56.626 Initializing NVMe Controllers 00:11:56.626 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:56.626 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:56.626 Initialization complete. Launching workers. 
00:11:56.626 submit (in ns) avg, min, max = 8605.4, 4491.9, 4026320.0 00:11:56.626 complete (in ns) avg, min, max = 31423.4, 2654.8, 6010564.4 00:11:56.626 00:11:56.626 Submit histogram 00:11:56.626 ================ 00:11:56.626 Range in us Cumulative Count 00:11:56.626 4.480 - 4.504: 0.0086% ( 1) 00:11:56.626 4.504 - 4.527: 0.7414% ( 85) 00:11:56.626 4.527 - 4.551: 3.5690% ( 328) 00:11:56.626 4.551 - 4.575: 9.5259% ( 691) 00:11:56.626 4.575 - 4.599: 16.6552% ( 827) 00:11:56.626 4.599 - 4.622: 23.2845% ( 769) 00:11:56.626 4.622 - 4.646: 26.2155% ( 340) 00:11:56.626 4.646 - 4.670: 27.5690% ( 157) 00:11:56.626 4.670 - 4.693: 28.3448% ( 90) 00:11:56.626 4.693 - 4.717: 30.2500% ( 221) 00:11:56.626 4.717 - 4.741: 34.6552% ( 511) 00:11:56.626 4.741 - 4.764: 42.1983% ( 875) 00:11:56.626 4.764 - 4.788: 50.1466% ( 922) 00:11:56.626 4.788 - 4.812: 56.3621% ( 721) 00:11:56.626 4.812 - 4.836: 59.1293% ( 321) 00:11:56.626 4.836 - 4.859: 59.8534% ( 84) 00:11:56.626 4.859 - 4.883: 60.4741% ( 72) 00:11:56.626 4.883 - 4.907: 61.2414% ( 89) 00:11:56.626 4.907 - 4.930: 62.4310% ( 138) 00:11:56.626 4.930 - 4.954: 64.3448% ( 222) 00:11:56.626 4.954 - 4.978: 65.8966% ( 180) 00:11:56.626 4.978 - 5.001: 67.4397% ( 179) 00:11:56.626 5.001 - 5.025: 68.5776% ( 132) 00:11:56.626 5.025 - 5.049: 69.2155% ( 74) 00:11:56.626 5.049 - 5.073: 69.5086% ( 34) 00:11:56.626 5.073 - 5.096: 69.6293% ( 14) 00:11:56.626 5.096 - 5.120: 69.7931% ( 19) 00:11:56.626 5.120 - 5.144: 70.3276% ( 62) 00:11:56.626 5.144 - 5.167: 71.7845% ( 169) 00:11:56.626 5.167 - 5.191: 74.4310% ( 307) 00:11:56.626 5.191 - 5.215: 77.0862% ( 308) 00:11:56.626 5.215 - 5.239: 79.4483% ( 274) 00:11:56.626 5.239 - 5.262: 80.3707% ( 107) 00:11:56.626 5.262 - 5.286: 80.7500% ( 44) 00:11:56.627 5.286 - 5.310: 81.3448% ( 69) 00:11:56.627 5.310 - 5.333: 82.5259% ( 137) 00:11:56.627 5.333 - 5.357: 83.9138% ( 161) 00:11:56.627 5.357 - 5.381: 85.6379% ( 200) 00:11:56.627 5.381 - 5.404: 86.4138% ( 90) 00:11:56.627 5.404 - 5.428: 86.9914% ( 67) 00:11:56.627 5.428 - 5.452: 87.4483% ( 53) 00:11:56.627 5.452 - 5.476: 87.9397% ( 57) 00:11:56.627 5.476 - 5.499: 88.0603% ( 14) 00:11:56.627 5.499 - 5.523: 88.1552% ( 11) 00:11:56.627 5.523 - 5.547: 88.6983% ( 63) 00:11:56.627 5.547 - 5.570: 90.6121% ( 222) 00:11:56.627 5.570 - 5.594: 92.9741% ( 274) 00:11:56.627 5.594 - 5.618: 95.4224% ( 284) 00:11:56.627 5.618 - 5.641: 96.3707% ( 110) 00:11:56.627 5.641 - 5.665: 96.6466% ( 32) 00:11:56.627 5.665 - 5.689: 96.7672% ( 14) 00:11:56.627 5.689 - 5.713: 96.8621% ( 11) 00:11:56.627 5.713 - 5.736: 96.9397% ( 9) 00:11:56.627 5.736 - 5.760: 96.9914% ( 6) 00:11:56.627 5.760 - 5.784: 97.0431% ( 6) 00:11:56.627 5.784 - 5.807: 97.1207% ( 9) 00:11:56.627 5.807 - 5.831: 97.2500% ( 15) 00:11:56.627 5.831 - 5.855: 97.5000% ( 29) 00:11:56.627 5.855 - 5.879: 97.5776% ( 9) 00:11:56.627 5.879 - 5.902: 97.6379% ( 7) 00:11:56.627 5.902 - 5.926: 97.6897% ( 6) 00:11:56.627 5.926 - 5.950: 97.7586% ( 8) 00:11:56.627 5.950 - 5.973: 97.8190% ( 7) 00:11:56.627 5.973 - 5.997: 97.8276% ( 1) 00:11:56.627 5.997 - 6.021: 97.8534% ( 3) 00:11:56.627 6.021 - 6.044: 97.8707% ( 2) 00:11:56.627 6.044 - 6.068: 97.9052% ( 4) 00:11:56.627 6.068 - 6.116: 97.9828% ( 9) 00:11:56.627 6.116 - 6.163: 98.0517% ( 8) 00:11:56.627 6.163 - 6.210: 98.1121% ( 7) 00:11:56.627 6.210 - 6.258: 98.1724% ( 7) 00:11:56.627 6.258 - 6.305: 98.2241% ( 6) 00:11:56.627 6.305 - 6.353: 98.2328% ( 1) 00:11:56.627 6.353 - 6.400: 98.2586% ( 3) 00:11:56.627 6.400 - 6.447: 98.2759% ( 2) 00:11:56.627 6.447 - 6.495: 98.3017% ( 3) 00:11:56.627 6.495 - 
6.542: 98.3362% ( 4) 00:11:56.627 6.542 - 6.590: 98.3534% ( 2) 00:11:56.627 6.590 - 6.637: 98.3621% ( 1) 00:11:56.627 6.637 - 6.684: 98.3793% ( 2) 00:11:56.627 6.684 - 6.732: 98.3966% ( 2) 00:11:56.627 6.732 - 6.779: 98.4138% ( 2) 00:11:56.627 6.779 - 6.827: 98.4310% ( 2) 00:11:56.627 6.827 - 6.874: 98.4741% ( 5) 00:11:56.627 6.874 - 6.921: 98.5603% ( 10) 00:11:56.627 6.921 - 6.969: 98.6552% ( 11) 00:11:56.627 6.969 - 7.016: 98.6897% ( 4) 00:11:56.627 7.159 - 7.206: 98.6983% ( 1) 00:11:56.627 7.206 - 7.253: 98.7069% ( 1) 00:11:56.627 7.301 - 7.348: 98.7155% ( 1) 00:11:56.627 7.538 - 7.585: 98.7414% ( 3) 00:11:56.627 7.633 - 7.680: 98.7586% ( 2) 00:11:56.627 7.870 - 7.917: 98.7672% ( 1) 00:11:56.627 7.964 - 8.012: 98.8017% ( 4) 00:11:56.627 8.154 - 8.201: 98.8190% ( 2) 00:11:56.627 8.201 - 8.249: 98.8448% ( 3) 00:11:56.627 8.249 - 8.296: 98.8534% ( 1) 00:11:56.627 8.391 - 8.439: 98.8621% ( 1) 00:11:56.627 8.486 - 8.533: 98.8707% ( 1) 00:11:56.627 8.533 - 8.581: 98.8793% ( 1) 00:11:56.627 8.581 - 8.628: 98.8879% ( 1) 00:11:56.627 8.676 - 8.723: 98.9052% ( 2) 00:11:56.627 8.723 - 8.770: 98.9138% ( 1) 00:11:56.627 8.770 - 8.818: 98.9310% ( 2) 00:11:56.627 8.865 - 8.913: 98.9483% ( 2) 00:11:56.627 8.913 - 8.960: 98.9569% ( 1) 00:11:56.627 9.007 - 9.055: 98.9655% ( 1) 00:11:56.627 9.055 - 9.102: 98.9828% ( 2) 00:11:56.627 9.102 - 9.150: 98.9914% ( 1) 00:11:56.627 9.150 - 9.197: 99.0172% ( 3) 00:11:56.627 9.197 - 9.244: 99.0259% ( 1) 00:11:56.627 9.292 - 9.339: 99.0345% ( 1) 00:11:56.627 9.339 - 9.387: 99.0517% ( 2) 00:11:56.627 9.387 - 9.434: 99.0690% ( 2) 00:11:56.627 9.481 - 9.529: 99.0776% ( 1) 00:11:56.627 9.576 - 9.624: 99.0948% ( 2) 00:11:56.627 9.671 - 9.719: 99.1121% ( 2) 00:11:56.627 9.719 - 9.766: 99.1207% ( 1) 00:11:56.627 9.813 - 9.861: 99.1379% ( 2) 00:11:56.627 9.861 - 9.908: 99.1466% ( 1) 00:11:56.627 9.908 - 9.956: 99.1638% ( 2) 00:11:56.627 9.956 - 10.003: 99.1724% ( 1) 00:11:56.627 10.003 - 10.050: 99.1983% ( 3) 00:11:56.627 10.050 - 10.098: 99.2328% ( 4) 00:11:56.627 10.098 - 10.145: 99.2500% ( 2) 00:11:56.627 10.193 - 10.240: 99.2672% ( 2) 00:11:56.627 10.240 - 10.287: 99.2845% ( 2) 00:11:56.627 10.335 - 10.382: 99.2931% ( 1) 00:11:56.627 10.382 - 10.430: 99.3017% ( 1) 00:11:56.627 10.430 - 10.477: 99.3103% ( 1) 00:11:56.627 10.524 - 10.572: 99.3190% ( 1) 00:11:56.627 10.572 - 10.619: 99.3448% ( 3) 00:11:56.627 10.667 - 10.714: 99.3621% ( 2) 00:11:56.627 10.809 - 10.856: 99.3793% ( 2) 00:11:56.627 10.904 - 10.951: 99.3879% ( 1) 00:11:56.627 10.951 - 10.999: 99.3966% ( 1) 00:11:56.627 10.999 - 11.046: 99.4138% ( 2) 00:11:56.627 11.046 - 11.093: 99.4224% ( 1) 00:11:56.627 11.141 - 11.188: 99.4310% ( 1) 00:11:56.627 11.188 - 11.236: 99.4397% ( 1) 00:11:56.627 11.236 - 11.283: 99.4483% ( 1) 00:11:56.627 11.283 - 11.330: 99.4569% ( 1) 00:11:56.627 11.378 - 11.425: 99.4655% ( 1) 00:11:56.627 11.425 - 11.473: 99.4741% ( 1) 00:11:56.627 11.520 - 11.567: 99.4914% ( 2) 00:11:56.627 11.662 - 11.710: 99.5000% ( 1) 00:11:56.627 11.757 - 11.804: 99.5086% ( 1) 00:11:56.627 11.804 - 11.852: 99.5259% ( 2) 00:11:56.627 11.852 - 11.899: 99.5345% ( 1) 00:11:56.627 11.899 - 11.947: 99.5517% ( 2) 00:11:56.627 11.947 - 11.994: 99.5603% ( 1) 00:11:56.627 12.231 - 12.326: 99.5862% ( 3) 00:11:56.627 12.326 - 12.421: 99.6034% ( 2) 00:11:56.627 12.421 - 12.516: 99.6121% ( 1) 00:11:56.627 12.516 - 12.610: 99.6207% ( 1) 00:11:56.627 12.610 - 12.705: 99.6293% ( 1) 00:11:56.627 12.705 - 12.800: 99.6466% ( 2) 00:11:56.627 12.800 - 12.895: 99.6552% ( 1) 00:11:56.627 12.895 - 12.990: 99.6638% ( 1) 00:11:56.627 
12.990 - 13.084: 99.6724% ( 1) 00:11:56.627 13.084 - 13.179: 99.6810% ( 1) 00:11:56.627 13.179 - 13.274: 99.6897% ( 1) 00:11:56.627 13.274 - 13.369: 99.6983% ( 1) 00:11:56.627 13.369 - 13.464: 99.7069% ( 1) 00:11:56.627 13.464 - 13.559: 99.7241% ( 2) 00:11:56.627 13.559 - 13.653: 99.7414% ( 2) 00:11:56.627 13.653 - 13.748: 99.7672% ( 3) 00:11:56.627 13.748 - 13.843: 99.7845% ( 2) 00:11:56.627 13.843 - 13.938: 99.8017% ( 2) 00:11:56.627 14.033 - 14.127: 99.8103% ( 1) 00:11:56.627 14.127 - 14.222: 99.8190% ( 1) 00:11:56.627 14.222 - 14.317: 99.8276% ( 1) 00:11:56.627 14.601 - 14.696: 99.8362% ( 1) 00:11:56.627 14.791 - 14.886: 99.8448% ( 1) 00:11:56.627 16.782 - 16.877: 99.8534% ( 1) 00:11:56.627 17.161 - 17.256: 99.8621% ( 1) 00:11:56.627 17.920 - 18.015: 99.8707% ( 1) 00:11:56.627 18.773 - 18.868: 99.8793% ( 1) 00:11:56.627 18.868 - 18.963: 99.8879% ( 1) 00:11:56.627 19.342 - 19.437: 99.8966% ( 1) 00:11:56.627 24.178 - 24.273: 99.9052% ( 1) 00:11:56.627 1747.627 - 1759.763: 99.9138% ( 1) 00:11:56.627 3980.705 - 4004.978: 99.9483% ( 4) 00:11:56.627 4004.978 - 4029.250: 100.0000% ( 6) 00:11:56.627 00:11:56.627 Complete histogram 00:11:56.627 ================== 00:11:56.627 Range in us Cumulative Count 00:11:56.627 2.655 - 2.667: 2.0603% ( 239) 00:11:56.627 2.667 - 2.679: 27.6983% ( 2974) 00:11:56.627 2.679 - 2.690: 51.7241% ( 2787) 00:11:56.627 2.690 - 2.702: 55.9914% ( 495) 00:11:56.627 2.702 - 2.714: 68.4914% ( 1450) 00:11:56.627 2.714 - 2.726: 83.8707% ( 1784) 00:11:56.627 2.726 - 2.738: 90.1207% ( 725) 00:11:56.627 2.738 - 2.750: 94.3017% ( 485) 00:11:56.627 2.750 - 2.761: 96.2672% ( 228) 00:11:56.627 2.761 - 2.773: 97.2672% ( 116) 00:11:56.627 2.773 - 2.785: 97.6379% ( 43) 00:11:56.627 2.785 - 2.797: 97.7500% ( 13) 00:11:56.627 2.797 - 2.809: 97.8448% ( 11) 00:11:56.627 2.809 - 2.821: 97.8879% ( 5) 00:11:56.627 2.821 - 2.833: 97.9397% ( 6) 00:11:56.627 2.833 - 2.844: 97.9741% ( 4) 00:11:56.627 2.844 - 2.856: 97.9828% ( 1) 00:11:56.627 2.868 - 2.880: 97.9914% ( 1) 00:11:56.627 2.892 - 2.904: 98.0000% ( 1) 00:11:56.628 2.904 - 2.916: 98.0172% ( 2) 00:11:56.628 2.916 - 2.927: 98.0259% ( 1) 00:11:56.628 2.939 - 2.951: 98.0431% ( 2) 00:11:56.628 2.951 - 2.963: 98.0690% ( 3) 00:11:56.628 2.963 - 2.975: 98.0862% ( 2) 00:11:56.628 2.987 - 2.999: 98.0948% ( 1) 00:11:56.628 2.999 - 3.010: 98.1034% ( 1) 00:11:56.628 3.034 - 3.058: 98.1379% ( 4) 00:11:56.628 3.105 - 3.129: 98.1638% ( 3) 00:11:56.628 3.129 - 3.153: 98.1897% ( 3) 00:11:56.628 3.153 - 3.176: 98.2155% ( 3) 00:11:56.628 3.176 - 3.200: 98.2500% ( 4) 00:11:56.628 3.200 - 3.224: 98.3103% ( 7) 00:11:56.628 3.224 - 3.247: 98.3534% ( 5) 00:11:56.628 3.247 - 3.271: 98.4310% ( 9) 00:11:56.628 3.271 - 3.295: 98.4569% ( 3) 00:11:56.628 3.295 - 3.319: 98.5431% ( 10) 00:11:56.628 3.319 - 3.342: 98.5776% ( 4) 00:11:56.628 3.342 - 3.366: 98.6207% ( 5) 00:11:56.628 3.366 - 3.390: 98.6897% ( 8) 00:11:56.628 3.390 - 3.413: 98.7414% ( 6) 00:11:56.628 3.413 - 3.437: 98.7672% ( 3) 00:11:56.628 3.437 - 3.461: 98.8103% ( 5) 00:11:56.628 3.461 - 3.484: 98.8448% ( 4) 00:11:56.628 3.484 - 3.508: 98.8793% ( 4) 00:11:56.628 3.508 - 3.532: 98.9052% ( 3) 00:11:56.628 3.532 - 3.556: 98.9310% ( 3) 00:11:56.628 3.556 - 3.579: 98.9483% ( 2) 00:11:56.628 3.579 - 3.603: 98.9569% ( 1) 00:11:56.628 3.603 - 3.627: 98.9655% ( 1) 00:11:56.628 3.627 - 3.650: 98.9741% ( 1) 00:11:56.628 3.650 - 3.674: 98.9828% ( 1) 00:11:56.628 3.674 - 3.698: 98.9914% ( 1) 00:11:56.628 3.698 - 3.721: 99.0000% ( 1) 00:11:56.628 3.745 - 3.769: 99.0259% ( 3) 00:11:56.628 3.816 - 3.840: 99.0345% ( 
1) 00:11:56.628 3.840 - 3.864: 99.0431% ( 1) 00:11:56.628 3.887 - 3.911: 99.0517% ( 1) 00:11:56.628 3.911 - 3.935: 99.0603% ( 1) 00:11:56.628 3.959 - 3.982: 99.0690% ( 1) 00:11:56.628 4.433 - 4.456: 99.0776% ( 1) 00:11:56.628 4.622 - 4.646: 99.0862% ( 1) 00:11:56.628 5.523 - 5.547: 99.0948% ( 1) 00:11:56.628 5.902 - 5.926: 99.1034% ( 1) 00:11:56.628 6.637 - 6.684: 99.1121% ( 1) 00:11:56.628 7.301 - 7.348: 99.1207% ( 1) 00:11:56.628 7.490 - 7.538: 99.1293% ( 1) 00:11:56.628 7.775 - 7.822: 99.1379% ( 1) 00:11:56.628 7.870 - 7.917: 99.1466% ( 1) [2024-07-24 19:09:02.225353] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:56.628 7.917 - 7.964: 99.1552% ( 1) 00:11:56.628 8.249 - 8.296: 99.1638% ( 1) 00:11:56.628 8.439 - 8.486: 99.1810% ( 2) 00:11:56.628 8.533 - 8.581: 99.1897% ( 1) 00:11:56.628 9.387 - 9.434: 99.1983% ( 1) 00:11:56.628 9.481 - 9.529: 99.2069% ( 1) 00:11:56.628 9.956 - 10.003: 99.2155% ( 1) 00:11:56.628 10.287 - 10.335: 99.2241% ( 1) 00:11:56.628 10.430 - 10.477: 99.2328% ( 1) 00:11:56.628 11.425 - 11.473: 99.2414% ( 1) 00:11:56.628 12.421 - 12.516: 99.2500% ( 1) 00:11:56.628 14.033 - 14.127: 99.2586% ( 1) 00:11:56.628 15.076 - 15.170: 99.2672% ( 1) 00:11:56.628 15.644 - 15.739: 99.2759% ( 1) 00:11:56.628 18.773 - 18.868: 99.2845% ( 1) 00:11:56.628 2002.489 - 2014.625: 99.2931% ( 1) 00:11:56.628 2123.852 - 2135.988: 99.3017% ( 1) 00:11:56.628 2560.759 - 2572.895: 99.3103% ( 1) 00:11:56.628 3252.527 - 3276.800: 99.3190% ( 1) 00:11:56.628 3980.705 - 4004.978: 99.7845% ( 54) 00:11:56.628 4004.978 - 4029.250: 99.9655% ( 21) 00:11:56.628 4975.881 - 5000.154: 99.9741% ( 1) 00:11:56.628 5995.330 - 6019.603: 100.0000% ( 3) 00:11:56.628 00:11:56.628 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:56.628 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:56.628 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:56.628 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:56.628 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:56.628 [ 00:11:56.628 { 00:11:56.628 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:56.628 "subtype": "Discovery", 00:11:56.628 "listen_addresses": [], 00:11:56.628 "allow_any_host": true, 00:11:56.628 "hosts": [] 00:11:56.628 }, 00:11:56.628 { 00:11:56.628 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:56.628 "subtype": "NVMe", 00:11:56.628 "listen_addresses": [ 00:11:56.628 { 00:11:56.628 "trtype": "VFIOUSER", 00:11:56.628 "adrfam": "IPv4", 00:11:56.628 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:56.628 "trsvcid": "0" 00:11:56.628 } 00:11:56.628 ], 00:11:56.628 "allow_any_host": true, 00:11:56.628 "hosts": [], 00:11:56.628 "serial_number": "SPDK1", 00:11:56.628 "model_number": "SPDK bdev Controller", 00:11:56.628 "max_namespaces": 32, 00:11:56.628 "min_cntlid": 1, 00:11:56.628 "max_cntlid": 65519, 00:11:56.628 "namespaces": [ 00:11:56.628 { 00:11:56.628 "nsid": 1, 00:11:56.628 "bdev_name": "Malloc1", 00:11:56.628 "name": "Malloc1", 00:11:56.628 "nguid": "193CCF3387B24516A9889A5D424B2FBC", 
00:11:56.628 "uuid": "193ccf33-87b2-4516-a988-9a5d424b2fbc" 00:11:56.628 } 00:11:56.628 ] 00:11:56.628 }, 00:11:56.628 { 00:11:56.628 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:56.629 "subtype": "NVMe", 00:11:56.629 "listen_addresses": [ 00:11:56.629 { 00:11:56.629 "trtype": "VFIOUSER", 00:11:56.629 "adrfam": "IPv4", 00:11:56.629 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:56.629 "trsvcid": "0" 00:11:56.629 } 00:11:56.629 ], 00:11:56.629 "allow_any_host": true, 00:11:56.629 "hosts": [], 00:11:56.629 "serial_number": "SPDK2", 00:11:56.629 "model_number": "SPDK bdev Controller", 00:11:56.629 "max_namespaces": 32, 00:11:56.629 "min_cntlid": 1, 00:11:56.629 "max_cntlid": 65519, 00:11:56.629 "namespaces": [ 00:11:56.629 { 00:11:56.629 "nsid": 1, 00:11:56.629 "bdev_name": "Malloc2", 00:11:56.629 "name": "Malloc2", 00:11:56.629 "nguid": "33F6B12AEE14462F99EEC4BE5F8B3EE9", 00:11:56.629 "uuid": "33f6b12a-ee14-462f-99ee-c4be5f8b3ee9" 00:11:56.629 } 00:11:56.629 ] 00:11:56.629 } 00:11:56.629 ] 00:11:56.629 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:56.629 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2543585 00:11:56.629 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:56.629 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:56.629 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:56.629 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:56.629 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:56.629 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:56.629 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:56.629 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:56.887 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.887 [2024-07-24 19:09:02.758024] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:57.144 Malloc3 00:11:57.144 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:57.402 [2024-07-24 19:09:03.211380] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:57.402 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:57.402 Asynchronous Event Request test 00:11:57.402 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:57.402 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:57.402 Registering asynchronous event callbacks... 
00:11:57.402 Starting namespace attribute notice tests for all controllers... 00:11:57.402 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:57.402 aer_cb - Changed Namespace 00:11:57.402 Cleaning up... 00:11:57.660 [ 00:11:57.660 { 00:11:57.660 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:57.660 "subtype": "Discovery", 00:11:57.660 "listen_addresses": [], 00:11:57.660 "allow_any_host": true, 00:11:57.661 "hosts": [] 00:11:57.661 }, 00:11:57.661 { 00:11:57.661 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:57.661 "subtype": "NVMe", 00:11:57.661 "listen_addresses": [ 00:11:57.661 { 00:11:57.661 "trtype": "VFIOUSER", 00:11:57.661 "adrfam": "IPv4", 00:11:57.661 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:57.661 "trsvcid": "0" 00:11:57.661 } 00:11:57.661 ], 00:11:57.661 "allow_any_host": true, 00:11:57.661 "hosts": [], 00:11:57.661 "serial_number": "SPDK1", 00:11:57.661 "model_number": "SPDK bdev Controller", 00:11:57.661 "max_namespaces": 32, 00:11:57.661 "min_cntlid": 1, 00:11:57.661 "max_cntlid": 65519, 00:11:57.661 "namespaces": [ 00:11:57.661 { 00:11:57.661 "nsid": 1, 00:11:57.661 "bdev_name": "Malloc1", 00:11:57.661 "name": "Malloc1", 00:11:57.661 "nguid": "193CCF3387B24516A9889A5D424B2FBC", 00:11:57.661 "uuid": "193ccf33-87b2-4516-a988-9a5d424b2fbc" 00:11:57.661 }, 00:11:57.661 { 00:11:57.661 "nsid": 2, 00:11:57.661 "bdev_name": "Malloc3", 00:11:57.661 "name": "Malloc3", 00:11:57.661 "nguid": "D354F842B77B448A846A8FCCAB0562D8", 00:11:57.661 "uuid": "d354f842-b77b-448a-846a-8fccab0562d8" 00:11:57.661 } 00:11:57.661 ] 00:11:57.661 }, 00:11:57.661 { 00:11:57.661 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:57.661 "subtype": "NVMe", 00:11:57.661 "listen_addresses": [ 00:11:57.661 { 00:11:57.661 "trtype": "VFIOUSER", 00:11:57.661 "adrfam": "IPv4", 00:11:57.661 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:57.661 "trsvcid": "0" 00:11:57.661 } 00:11:57.661 ], 00:11:57.661 "allow_any_host": true, 00:11:57.661 "hosts": [], 00:11:57.661 "serial_number": "SPDK2", 00:11:57.661 "model_number": "SPDK bdev Controller", 00:11:57.661 "max_namespaces": 32, 00:11:57.661 "min_cntlid": 1, 00:11:57.661 "max_cntlid": 65519, 00:11:57.661 "namespaces": [ 00:11:57.661 { 00:11:57.661 "nsid": 1, 00:11:57.661 "bdev_name": "Malloc2", 00:11:57.661 "name": "Malloc2", 00:11:57.661 "nguid": "33F6B12AEE14462F99EEC4BE5F8B3EE9", 00:11:57.661 "uuid": "33f6b12a-ee14-462f-99ee-c4be5f8b3ee9" 00:11:57.661 } 00:11:57.661 ] 00:11:57.661 } 00:11:57.661 ] 00:11:57.661 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2543585 00:11:57.661 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:57.661 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:57.661 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:57.661 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:57.661 [2024-07-24 19:09:03.547185] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:11:57.661 [2024-07-24 19:09:03.547238] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543685 ] 00:11:57.661 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.661 [2024-07-24 19:09:03.590310] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:57.661 [2024-07-24 19:09:03.592777] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:57.661 [2024-07-24 19:09:03.592809] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc9a7b48000 00:11:57.661 [2024-07-24 19:09:03.593780] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:57.661 [2024-07-24 19:09:03.594784] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:57.661 [2024-07-24 19:09:03.595787] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:57.661 [2024-07-24 19:09:03.596800] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:57.661 [2024-07-24 19:09:03.597804] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:57.661 [2024-07-24 19:09:03.598810] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:57.661 [2024-07-24 19:09:03.599812] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:57.661 [2024-07-24 19:09:03.600823] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:57.661 [2024-07-24 19:09:03.601847] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:57.661 [2024-07-24 19:09:03.601872] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc9a7b3d000 00:11:57.661 [2024-07-24 19:09:03.603317] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:57.661 [2024-07-24 19:09:03.623324] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:57.661 [2024-07-24 19:09:03.623363] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:57.661 [2024-07-24 19:09:03.625460] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:57.661 [2024-07-24 19:09:03.625528] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:57.661 [2024-07-24 19:09:03.625640] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:11:57.661 [2024-07-24 19:09:03.625669] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:57.661 [2024-07-24 19:09:03.625681] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:57.661 [2024-07-24 19:09:03.626469] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:57.661 [2024-07-24 19:09:03.626506] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:57.661 [2024-07-24 19:09:03.626523] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:57.661 [2024-07-24 19:09:03.627469] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:57.661 [2024-07-24 19:09:03.627499] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:57.661 [2024-07-24 19:09:03.627521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:57.661 [2024-07-24 19:09:03.631500] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:57.661 [2024-07-24 19:09:03.631523] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:57.661 [2024-07-24 19:09:03.632506] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:57.661 [2024-07-24 19:09:03.632528] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:57.661 [2024-07-24 19:09:03.632539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:57.661 [2024-07-24 19:09:03.632553] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:57.661 [2024-07-24 19:09:03.632664] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:57.662 [2024-07-24 19:09:03.632674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:57.662 [2024-07-24 19:09:03.632684] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:57.662 [2024-07-24 19:09:03.633571] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:57.662 [2024-07-24 19:09:03.634529] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:57.662 [2024-07-24 19:09:03.635546] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:57.662 [2024-07-24 19:09:03.636544] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:57.662 [2024-07-24 19:09:03.636619] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:57.662 [2024-07-24 19:09:03.637562] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:57.662 [2024-07-24 19:09:03.637585] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:57.662 [2024-07-24 19:09:03.637596] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:57.662 [2024-07-24 19:09:03.637625] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:57.662 [2024-07-24 19:09:03.637640] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:57.662 [2024-07-24 19:09:03.637666] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:57.662 [2024-07-24 19:09:03.637677] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:57.662 [2024-07-24 19:09:03.637685] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:57.662 [2024-07-24 19:09:03.637706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:57.662 [2024-07-24 19:09:03.643498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:57.662 [2024-07-24 19:09:03.643528] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:57.662 [2024-07-24 19:09:03.643540] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:57.662 [2024-07-24 19:09:03.643549] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:57.662 [2024-07-24 19:09:03.643558] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:57.662 [2024-07-24 19:09:03.643567] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:57.662 [2024-07-24 19:09:03.643576] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:57.662 [2024-07-24 19:09:03.643586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:57.662 [2024-07-24 19:09:03.643601] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:57.662 [2024-07-24 19:09:03.643624] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:57.662 [2024-07-24 19:09:03.651492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:57.662 [2024-07-24 19:09:03.651523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.662 [2024-07-24 19:09:03.651540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.662 [2024-07-24 19:09:03.651555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.662 [2024-07-24 19:09:03.651569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.662 [2024-07-24 19:09:03.651579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:57.662 [2024-07-24 19:09:03.651598] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:57.662 [2024-07-24 19:09:03.651615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:57.662 [2024-07-24 19:09:03.659504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:57.662 [2024-07-24 19:09:03.659523] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:57.662 [2024-07-24 19:09:03.659533] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:57.662 [2024-07-24 19:09:03.659551] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:57.662 [2024-07-24 19:09:03.659564] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:57.662 [2024-07-24 19:09:03.659580] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:57.662 [2024-07-24 19:09:03.667494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:57.662 [2024-07-24 19:09:03.667582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:57.662 [2024-07-24 19:09:03.667606] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:57.662 [2024-07-24 19:09:03.667622] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:57.662 [2024-07-24 19:09:03.667632] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:57.662 [2024-07-24 
19:09:03.667639] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:57.662 [2024-07-24 19:09:03.667650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:57.920 [2024-07-24 19:09:03.675504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:57.920 [2024-07-24 19:09:03.675536] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:57.920 [2024-07-24 19:09:03.675562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:57.920 [2024-07-24 19:09:03.675581] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:57.920 [2024-07-24 19:09:03.675596] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:57.920 [2024-07-24 19:09:03.675606] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:57.920 [2024-07-24 19:09:03.675613] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:57.920 [2024-07-24 19:09:03.675626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:57.921 [2024-07-24 19:09:03.683494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:57.921 [2024-07-24 19:09:03.683532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:57.921 [2024-07-24 19:09:03.683551] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:57.921 [2024-07-24 19:09:03.683568] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:57.921 [2024-07-24 19:09:03.683578] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:57.921 [2024-07-24 19:09:03.683586] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:57.921 [2024-07-24 19:09:03.683598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:57.921 [2024-07-24 19:09:03.691500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:57.921 [2024-07-24 19:09:03.691524] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:57.921 [2024-07-24 19:09:03.691539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:57.921 [2024-07-24 19:09:03.691558] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:57.921 [2024-07-24 
19:09:03.691574] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:57.921 [2024-07-24 19:09:03.691585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:57.921 [2024-07-24 19:09:03.691599] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:57.921 [2024-07-24 19:09:03.691610] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:57.921 [2024-07-24 19:09:03.691619] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:57.921 [2024-07-24 19:09:03.691629] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:57.921 [2024-07-24 19:09:03.691658] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:57.921 [2024-07-24 19:09:03.699495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:57.921 [2024-07-24 19:09:03.699523] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:57.921 [2024-07-24 19:09:03.707498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:57.921 [2024-07-24 19:09:03.707527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:57.921 [2024-07-24 19:09:03.715494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:57.921 [2024-07-24 19:09:03.715522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:57.921 [2024-07-24 19:09:03.723494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:57.921 [2024-07-24 19:09:03.723532] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:57.921 [2024-07-24 19:09:03.723544] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:57.921 [2024-07-24 19:09:03.723552] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:57.921 [2024-07-24 19:09:03.723559] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:57.921 [2024-07-24 19:09:03.723566] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:11:57.921 [2024-07-24 19:09:03.723577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:57.921 [2024-07-24 19:09:03.723591] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:57.921 [2024-07-24 19:09:03.723601] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:11:57.921 [2024-07-24 19:09:03.723608] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:57.921 [2024-07-24 19:09:03.723619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:57.921 [2024-07-24 19:09:03.723632] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:57.921 [2024-07-24 19:09:03.723641] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:57.921 [2024-07-24 19:09:03.723648] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:57.921 [2024-07-24 19:09:03.723659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:57.921 [2024-07-24 19:09:03.723674] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:57.921 [2024-07-24 19:09:03.723683] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:57.921 [2024-07-24 19:09:03.723694] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:57.921 [2024-07-24 19:09:03.723706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:57.921 [2024-07-24 19:09:03.731494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:57.921 [2024-07-24 19:09:03.731524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:57.921 [2024-07-24 19:09:03.731544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:57.921 [2024-07-24 19:09:03.731559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:57.921 ===================================================== 00:11:57.921 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:57.921 ===================================================== 00:11:57.921 Controller Capabilities/Features 00:11:57.921 ================================ 00:11:57.921 Vendor ID: 4e58 00:11:57.921 Subsystem Vendor ID: 4e58 00:11:57.921 Serial Number: SPDK2 00:11:57.921 Model Number: SPDK bdev Controller 00:11:57.921 Firmware Version: 24.09 00:11:57.921 Recommended Arb Burst: 6 00:11:57.921 IEEE OUI Identifier: 8d 6b 50 00:11:57.921 Multi-path I/O 00:11:57.921 May have multiple subsystem ports: Yes 00:11:57.921 May have multiple controllers: Yes 00:11:57.921 Associated with SR-IOV VF: No 00:11:57.921 Max Data Transfer Size: 131072 00:11:57.921 Max Number of Namespaces: 32 00:11:57.921 Max Number of I/O Queues: 127 00:11:57.921 NVMe Specification Version (VS): 1.3 00:11:57.921 NVMe Specification Version (Identify): 1.3 00:11:57.921 Maximum Queue Entries: 256 00:11:57.921 Contiguous Queues Required: Yes 00:11:57.921 Arbitration Mechanisms Supported 00:11:57.921 Weighted Round Robin: Not Supported 00:11:57.921 Vendor Specific: Not Supported 00:11:57.921 Reset Timeout: 15000 ms 00:11:57.921 Doorbell Stride: 4 
bytes 00:11:57.921 NVM Subsystem Reset: Not Supported 00:11:57.921 Command Sets Supported 00:11:57.921 NVM Command Set: Supported 00:11:57.921 Boot Partition: Not Supported 00:11:57.921 Memory Page Size Minimum: 4096 bytes 00:11:57.921 Memory Page Size Maximum: 4096 bytes 00:11:57.921 Persistent Memory Region: Not Supported 00:11:57.921 Optional Asynchronous Events Supported 00:11:57.921 Namespace Attribute Notices: Supported 00:11:57.921 Firmware Activation Notices: Not Supported 00:11:57.921 ANA Change Notices: Not Supported 00:11:57.921 PLE Aggregate Log Change Notices: Not Supported 00:11:57.921 LBA Status Info Alert Notices: Not Supported 00:11:57.921 EGE Aggregate Log Change Notices: Not Supported 00:11:57.921 Normal NVM Subsystem Shutdown event: Not Supported 00:11:57.921 Zone Descriptor Change Notices: Not Supported 00:11:57.921 Discovery Log Change Notices: Not Supported 00:11:57.921 Controller Attributes 00:11:57.921 128-bit Host Identifier: Supported 00:11:57.921 Non-Operational Permissive Mode: Not Supported 00:11:57.921 NVM Sets: Not Supported 00:11:57.921 Read Recovery Levels: Not Supported 00:11:57.921 Endurance Groups: Not Supported 00:11:57.921 Predictable Latency Mode: Not Supported 00:11:57.921 Traffic Based Keep ALive: Not Supported 00:11:57.921 Namespace Granularity: Not Supported 00:11:57.921 SQ Associations: Not Supported 00:11:57.921 UUID List: Not Supported 00:11:57.921 Multi-Domain Subsystem: Not Supported 00:11:57.921 Fixed Capacity Management: Not Supported 00:11:57.921 Variable Capacity Management: Not Supported 00:11:57.921 Delete Endurance Group: Not Supported 00:11:57.921 Delete NVM Set: Not Supported 00:11:57.921 Extended LBA Formats Supported: Not Supported 00:11:57.921 Flexible Data Placement Supported: Not Supported 00:11:57.921 00:11:57.921 Controller Memory Buffer Support 00:11:57.921 ================================ 00:11:57.921 Supported: No 00:11:57.921 00:11:57.921 Persistent Memory Region Support 00:11:57.921 ================================ 00:11:57.921 Supported: No 00:11:57.921 00:11:57.921 Admin Command Set Attributes 00:11:57.921 ============================ 00:11:57.922 Security Send/Receive: Not Supported 00:11:57.922 Format NVM: Not Supported 00:11:57.922 Firmware Activate/Download: Not Supported 00:11:57.922 Namespace Management: Not Supported 00:11:57.922 Device Self-Test: Not Supported 00:11:57.922 Directives: Not Supported 00:11:57.922 NVMe-MI: Not Supported 00:11:57.922 Virtualization Management: Not Supported 00:11:57.922 Doorbell Buffer Config: Not Supported 00:11:57.922 Get LBA Status Capability: Not Supported 00:11:57.922 Command & Feature Lockdown Capability: Not Supported 00:11:57.922 Abort Command Limit: 4 00:11:57.922 Async Event Request Limit: 4 00:11:57.922 Number of Firmware Slots: N/A 00:11:57.922 Firmware Slot 1 Read-Only: N/A 00:11:57.922 Firmware Activation Without Reset: N/A 00:11:57.922 Multiple Update Detection Support: N/A 00:11:57.922 Firmware Update Granularity: No Information Provided 00:11:57.922 Per-Namespace SMART Log: No 00:11:57.922 Asymmetric Namespace Access Log Page: Not Supported 00:11:57.922 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:57.922 Command Effects Log Page: Supported 00:11:57.922 Get Log Page Extended Data: Supported 00:11:57.922 Telemetry Log Pages: Not Supported 00:11:57.922 Persistent Event Log Pages: Not Supported 00:11:57.922 Supported Log Pages Log Page: May Support 00:11:57.922 Commands Supported & Effects Log Page: Not Supported 00:11:57.922 Feature Identifiers & Effects Log 
Page:May Support 00:11:57.922 NVMe-MI Commands & Effects Log Page: May Support 00:11:57.922 Data Area 4 for Telemetry Log: Not Supported 00:11:57.922 Error Log Page Entries Supported: 128 00:11:57.922 Keep Alive: Supported 00:11:57.922 Keep Alive Granularity: 10000 ms 00:11:57.922 00:11:57.922 NVM Command Set Attributes 00:11:57.922 ========================== 00:11:57.922 Submission Queue Entry Size 00:11:57.922 Max: 64 00:11:57.922 Min: 64 00:11:57.922 Completion Queue Entry Size 00:11:57.922 Max: 16 00:11:57.922 Min: 16 00:11:57.922 Number of Namespaces: 32 00:11:57.922 Compare Command: Supported 00:11:57.922 Write Uncorrectable Command: Not Supported 00:11:57.922 Dataset Management Command: Supported 00:11:57.922 Write Zeroes Command: Supported 00:11:57.922 Set Features Save Field: Not Supported 00:11:57.922 Reservations: Not Supported 00:11:57.922 Timestamp: Not Supported 00:11:57.922 Copy: Supported 00:11:57.922 Volatile Write Cache: Present 00:11:57.922 Atomic Write Unit (Normal): 1 00:11:57.922 Atomic Write Unit (PFail): 1 00:11:57.922 Atomic Compare & Write Unit: 1 00:11:57.922 Fused Compare & Write: Supported 00:11:57.922 Scatter-Gather List 00:11:57.922 SGL Command Set: Supported (Dword aligned) 00:11:57.922 SGL Keyed: Not Supported 00:11:57.922 SGL Bit Bucket Descriptor: Not Supported 00:11:57.922 SGL Metadata Pointer: Not Supported 00:11:57.922 Oversized SGL: Not Supported 00:11:57.922 SGL Metadata Address: Not Supported 00:11:57.922 SGL Offset: Not Supported 00:11:57.922 Transport SGL Data Block: Not Supported 00:11:57.922 Replay Protected Memory Block: Not Supported 00:11:57.922 00:11:57.922 Firmware Slot Information 00:11:57.922 ========================= 00:11:57.922 Active slot: 1 00:11:57.922 Slot 1 Firmware Revision: 24.09 00:11:57.922 00:11:57.922 00:11:57.922 Commands Supported and Effects 00:11:57.922 ============================== 00:11:57.922 Admin Commands 00:11:57.922 -------------- 00:11:57.922 Get Log Page (02h): Supported 00:11:57.922 Identify (06h): Supported 00:11:57.922 Abort (08h): Supported 00:11:57.922 Set Features (09h): Supported 00:11:57.922 Get Features (0Ah): Supported 00:11:57.922 Asynchronous Event Request (0Ch): Supported 00:11:57.922 Keep Alive (18h): Supported 00:11:57.922 I/O Commands 00:11:57.922 ------------ 00:11:57.922 Flush (00h): Supported LBA-Change 00:11:57.922 Write (01h): Supported LBA-Change 00:11:57.922 Read (02h): Supported 00:11:57.922 Compare (05h): Supported 00:11:57.922 Write Zeroes (08h): Supported LBA-Change 00:11:57.922 Dataset Management (09h): Supported LBA-Change 00:11:57.922 Copy (19h): Supported LBA-Change 00:11:57.922 00:11:57.922 Error Log 00:11:57.922 ========= 00:11:57.922 00:11:57.922 Arbitration 00:11:57.922 =========== 00:11:57.922 Arbitration Burst: 1 00:11:57.922 00:11:57.922 Power Management 00:11:57.922 ================ 00:11:57.922 Number of Power States: 1 00:11:57.922 Current Power State: Power State #0 00:11:57.922 Power State #0: 00:11:57.922 Max Power: 0.00 W 00:11:57.922 Non-Operational State: Operational 00:11:57.922 Entry Latency: Not Reported 00:11:57.922 Exit Latency: Not Reported 00:11:57.922 Relative Read Throughput: 0 00:11:57.922 Relative Read Latency: 0 00:11:57.922 Relative Write Throughput: 0 00:11:57.922 Relative Write Latency: 0 00:11:57.922 Idle Power: Not Reported 00:11:57.922 Active Power: Not Reported 00:11:57.922 Non-Operational Permissive Mode: Not Supported 00:11:57.922 00:11:57.922 Health Information 00:11:57.922 ================== 00:11:57.922 Critical Warnings: 00:11:57.922 
Available Spare Space: OK 00:11:57.922 Temperature: OK 00:11:57.922 Device Reliability: OK 00:11:57.922 Read Only: No 00:11:57.922 Volatile Memory Backup: OK 00:11:57.922 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:57.922 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:57.922 Available Spare: 0% 00:11:57.922 Available Spare Threshold: 0% 00:11:57.922 [2024-07-24 19:09:03.731699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:57.922 [2024-07-24 19:09:03.739497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:57.922 [2024-07-24 19:09:03.739551] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:57.922 [2024-07-24 19:09:03.739572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.922 [2024-07-24 19:09:03.739585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.922 [2024-07-24 19:09:03.739597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.922 [2024-07-24 19:09:03.739608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.922 [2024-07-24 19:09:03.739682] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:57.922 [2024-07-24 19:09:03.739706] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:57.922 [2024-07-24 19:09:03.740691] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:57.922 [2024-07-24 19:09:03.740772] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:57.922 [2024-07-24 19:09:03.740789] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:57.922 [2024-07-24 19:09:03.741699] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:57.922 [2024-07-24 19:09:03.741726] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:57.922 [2024-07-24 19:09:03.741800] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:57.922 [2024-07-24 19:09:03.744494] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:57.922 Life Percentage Used: 0% 00:11:57.922 Data Units Read: 0 00:11:57.922 Data Units Written: 0 00:11:57.922 Host Read Commands: 0 00:11:57.922 Host Write Commands: 0 00:11:57.922 Controller Busy Time: 0 minutes 00:11:57.923 Power Cycles: 0 00:11:57.923 Power On Hours: 0 hours 00:11:57.923 Unsafe Shutdowns: 0 00:11:57.923 Unrecoverable Media Errors: 0 00:11:57.923 Lifetime Error Log Entries: 0 00:11:57.923 Warning Temperature Time: 0 minutes 00:11:57.923 Critical Temperature Time: 0 minutes 00:11:57.923 
00:11:57.923 Number of Queues 00:11:57.923 ================ 00:11:57.923 Number of I/O Submission Queues: 127 00:11:57.923 Number of I/O Completion Queues: 127 00:11:57.923 00:11:57.923 Active Namespaces 00:11:57.923 ================= 00:11:57.923 Namespace ID:1 00:11:57.923 Error Recovery Timeout: Unlimited 00:11:57.923 Command Set Identifier: NVM (00h) 00:11:57.923 Deallocate: Supported 00:11:57.923 Deallocated/Unwritten Error: Not Supported 00:11:57.923 Deallocated Read Value: Unknown 00:11:57.923 Deallocate in Write Zeroes: Not Supported 00:11:57.923 Deallocated Guard Field: 0xFFFF 00:11:57.923 Flush: Supported 00:11:57.923 Reservation: Supported 00:11:57.923 Namespace Sharing Capabilities: Multiple Controllers 00:11:57.923 Size (in LBAs): 131072 (0GiB) 00:11:57.923 Capacity (in LBAs): 131072 (0GiB) 00:11:57.923 Utilization (in LBAs): 131072 (0GiB) 00:11:57.923 NGUID: 33F6B12AEE14462F99EEC4BE5F8B3EE9 00:11:57.923 UUID: 33f6b12a-ee14-462f-99ee-c4be5f8b3ee9 00:11:57.923 Thin Provisioning: Not Supported 00:11:57.923 Per-NS Atomic Units: Yes 00:11:57.923 Atomic Boundary Size (Normal): 0 00:11:57.923 Atomic Boundary Size (PFail): 0 00:11:57.923 Atomic Boundary Offset: 0 00:11:57.923 Maximum Single Source Range Length: 65535 00:11:57.923 Maximum Copy Length: 65535 00:11:57.923 Maximum Source Range Count: 1 00:11:57.923 NGUID/EUI64 Never Reused: No 00:11:57.923 Namespace Write Protected: No 00:11:57.923 Number of LBA Formats: 1 00:11:57.923 Current LBA Format: LBA Format #00 00:11:57.923 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.923 00:11:57.923 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:57.923 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.181 [2024-07-24 19:09:03.967582] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:03.442 Initializing NVMe Controllers 00:12:03.442 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:03.442 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:03.442 Initialization complete. Launching workers. 
00:12:03.442 ======================================================== 00:12:03.442 Latency(us) 00:12:03.442 Device Information : IOPS MiB/s Average min max 00:12:03.442 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24066.56 94.01 5317.92 1474.63 7625.05 00:12:03.442 ======================================================== 00:12:03.442 Total : 24066.56 94.01 5317.92 1474.63 7625.05 00:12:03.442 00:12:03.442 [2024-07-24 19:09:09.070788] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:03.442 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:03.442 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.442 [2024-07-24 19:09:09.314496] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:08.705 Initializing NVMe Controllers 00:12:08.705 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:08.705 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:08.705 Initialization complete. Launching workers. 00:12:08.705 ======================================================== 00:12:08.705 Latency(us) 00:12:08.705 Device Information : IOPS MiB/s Average min max 00:12:08.705 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24069.59 94.02 5317.87 1490.11 10548.55 00:12:08.705 ======================================================== 00:12:08.705 Total : 24069.59 94.02 5317.87 1490.11 10548.55 00:12:08.705 00:12:08.705 [2024-07-24 19:09:14.336862] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:08.705 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:08.705 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.705 [2024-07-24 19:09:14.568507] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:14.045 [2024-07-24 19:09:19.702633] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:14.045 Initializing NVMe Controllers 00:12:14.045 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:14.045 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:14.045 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:14.045 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:14.045 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:14.045 Initialization complete. Launching workers. 
00:12:14.045 Starting thread on core 2 00:12:14.045 Starting thread on core 3 00:12:14.045 Starting thread on core 1 00:12:14.046 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:14.046 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.046 [2024-07-24 19:09:20.003990] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.239 [2024-07-24 19:09:23.398722] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.239 Initializing NVMe Controllers 00:12:18.239 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.239 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.239 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:18.239 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:18.239 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:18.239 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:18.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:18.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:18.240 Initialization complete. Launching workers. 00:12:18.240 Starting thread on core 1 with urgent priority queue 00:12:18.240 Starting thread on core 2 with urgent priority queue 00:12:18.240 Starting thread on core 3 with urgent priority queue 00:12:18.240 Starting thread on core 0 with urgent priority queue 00:12:18.240 SPDK bdev Controller (SPDK2 ) core 0: 7163.33 IO/s 13.96 secs/100000 ios 00:12:18.240 SPDK bdev Controller (SPDK2 ) core 1: 6966.67 IO/s 14.35 secs/100000 ios 00:12:18.240 SPDK bdev Controller (SPDK2 ) core 2: 6607.67 IO/s 15.13 secs/100000 ios 00:12:18.240 SPDK bdev Controller (SPDK2 ) core 3: 6935.00 IO/s 14.42 secs/100000 ios 00:12:18.240 ======================================================== 00:12:18.240 00:12:18.240 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:18.240 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.240 [2024-07-24 19:09:23.672375] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.240 Initializing NVMe Controllers 00:12:18.240 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.240 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.240 Namespace ID: 1 size: 0GB 00:12:18.240 Initialization complete. 00:12:18.240 INFO: using host memory buffer for IO 00:12:18.240 Hello world! 
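Every client above (perf read/write, reconnect, arbitration, hello_world) reaches the controller through the same SPDK transport ID string. As a hedged sketch of re-running the two perf passes by hand (all flags are copied from the @84/@85 invocations earlier in this run; the shortened ./spdk path and the comments are editorial):

  # transport ID: transport type, vfio-user socket directory, subsystem NQN
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # queue depth 128, 4096-byte I/O, 5 s per pass, core mask 0x2 (core 1);
  # -s/-g memory options exactly as logged
  ./spdk/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  ./spdk/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2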
00:12:18.240 [2024-07-24 19:09:23.680492] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.240 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:18.240 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.240 [2024-07-24 19:09:23.966554] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:19.175 Initializing NVMe Controllers 00:12:19.175 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:19.175 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:19.175 Initialization complete. Launching workers. 00:12:19.175 submit (in ns) avg, min, max = 9405.5, 4518.5, 5003318.5 00:12:19.175 complete (in ns) avg, min, max = 30314.7, 2632.6, 5005823.7 00:12:19.175 00:12:19.175 Submit histogram 00:12:19.175 ================ 00:12:19.175 Range in us Cumulative Count 00:12:19.175 4.504 - 4.527: 0.0085% ( 1) 00:12:19.175 4.527 - 4.551: 0.2305% ( 26) 00:12:19.175 4.551 - 4.575: 1.1866% ( 112) 00:12:19.175 4.575 - 4.599: 3.8330% ( 310) 00:12:19.175 4.599 - 4.622: 7.8453% ( 470) 00:12:19.175 4.622 - 4.646: 12.9589% ( 599) 00:12:19.175 4.646 - 4.670: 17.1590% ( 492) 00:12:19.175 4.670 - 4.693: 19.4554% ( 269) 00:12:19.175 4.693 - 4.717: 20.5139% ( 124) 00:12:19.175 4.717 - 4.741: 21.2395% ( 85) 00:12:19.175 4.741 - 4.764: 22.9810% ( 204) 00:12:19.175 4.764 - 4.788: 26.2592% ( 384) 00:12:19.175 4.788 - 4.812: 30.6386% ( 513) 00:12:19.175 4.812 - 4.836: 34.9667% ( 507) 00:12:19.175 4.836 - 4.859: 38.2961% ( 390) 00:12:19.175 4.859 - 4.883: 39.8754% ( 185) 00:12:19.175 4.883 - 4.907: 40.3790% ( 59) 00:12:19.175 4.907 - 4.930: 40.7632% ( 45) 00:12:19.175 4.930 - 4.954: 41.1217% ( 42) 00:12:19.175 4.954 - 4.978: 41.5656% ( 52) 00:12:19.175 4.978 - 5.001: 42.0096% ( 52) 00:12:19.175 5.001 - 5.025: 42.4535% ( 52) 00:12:19.175 5.025 - 5.049: 42.8291% ( 44) 00:12:19.175 5.049 - 5.073: 43.0254% ( 23) 00:12:19.175 5.073 - 5.096: 43.1706% ( 17) 00:12:19.175 5.096 - 5.120: 43.2730% ( 12) 00:12:19.175 5.120 - 5.144: 43.3840% ( 13) 00:12:19.175 5.144 - 5.167: 43.5889% ( 24) 00:12:19.175 5.167 - 5.191: 44.3999% ( 95) 00:12:19.175 5.191 - 5.215: 46.8926% ( 292) 00:12:19.175 5.215 - 5.239: 50.2903% ( 398) 00:12:19.175 5.239 - 5.262: 54.1574% ( 453) 00:12:19.175 5.262 - 5.286: 56.6160% ( 288) 00:12:19.175 5.286 - 5.310: 57.4782% ( 101) 00:12:19.175 5.310 - 5.333: 58.2551% ( 91) 00:12:19.175 5.333 - 5.357: 60.3722% ( 248) 00:12:19.175 5.357 - 5.381: 63.7528% ( 396) 00:12:19.175 5.381 - 5.404: 67.3382% ( 420) 00:12:19.175 5.404 - 5.428: 70.5310% ( 374) 00:12:19.175 5.428 - 5.452: 71.8798% ( 158) 00:12:19.175 5.452 - 5.476: 73.2115% ( 156) 00:12:19.175 5.476 - 5.499: 74.3640% ( 135) 00:12:19.175 5.499 - 5.523: 75.1067% ( 87) 00:12:19.175 5.523 - 5.547: 75.3970% ( 34) 00:12:19.175 5.547 - 5.570: 75.7384% ( 40) 00:12:19.175 5.570 - 5.594: 78.4531% ( 318) 00:12:19.175 5.594 - 5.618: 83.2423% ( 561) 00:12:19.175 5.618 - 5.641: 88.8766% ( 660) 00:12:19.175 5.641 - 5.665: 92.7010% ( 448) 00:12:19.175 5.665 - 5.689: 93.6657% ( 113) 00:12:19.175 5.689 - 5.713: 94.2035% ( 63) 00:12:19.175 5.713 - 5.736: 94.4938% ( 34) 00:12:19.175 5.736 - 5.760: 94.6389% ( 17) 00:12:19.175 5.760 - 5.784: 94.7413% ( 12) 00:12:19.175 5.784 - 5.807: 94.8096% ( 8) 00:12:19.175 5.807 - 
5.831: 94.9206% ( 13) 00:12:19.175 5.831 - 5.855: 95.0316% ( 13) 00:12:19.175 5.855 - 5.879: 95.2535% ( 26) 00:12:19.175 5.879 - 5.902: 95.3987% ( 17) 00:12:19.175 5.902 - 5.926: 95.5779% ( 21) 00:12:19.175 5.926 - 5.950: 95.6975% ( 14) 00:12:19.175 5.950 - 5.973: 95.7658% ( 8) 00:12:19.175 5.973 - 5.997: 95.8682% ( 12) 00:12:19.175 5.997 - 6.021: 95.9536% ( 10) 00:12:19.175 6.021 - 6.044: 96.0475% ( 11) 00:12:19.175 6.044 - 6.068: 96.1158% ( 8) 00:12:19.175 6.068 - 6.116: 96.2353% ( 14) 00:12:19.175 6.116 - 6.163: 96.3206% ( 10) 00:12:19.175 6.163 - 6.210: 96.3804% ( 7) 00:12:19.175 6.210 - 6.258: 96.4487% ( 8) 00:12:19.175 6.258 - 6.305: 96.4914% ( 5) 00:12:19.175 6.305 - 6.353: 96.6024% ( 13) 00:12:19.175 6.353 - 6.400: 96.7560% ( 18) 00:12:19.175 6.400 - 6.447: 96.8585% ( 12) 00:12:19.175 6.447 - 6.495: 96.9524% ( 11) 00:12:19.175 6.495 - 6.542: 97.0975% ( 17) 00:12:19.175 6.542 - 6.590: 97.2426% ( 17) 00:12:19.175 6.590 - 6.637: 97.2853% ( 5) 00:12:19.175 6.637 - 6.684: 97.3536% ( 8) 00:12:19.175 6.684 - 6.732: 97.3963% ( 5) 00:12:19.175 6.732 - 6.779: 97.4219% ( 3) 00:12:19.175 6.779 - 6.827: 97.4816% ( 7) 00:12:19.175 6.827 - 6.874: 97.5414% ( 7) 00:12:19.175 6.874 - 6.921: 97.7719% ( 27) 00:12:19.175 6.921 - 6.969: 98.4463% ( 79) 00:12:19.175 6.969 - 7.016: 98.9329% ( 57) 00:12:19.175 7.016 - 7.064: 99.1036% ( 20) 00:12:19.175 7.064 - 7.111: 99.1805% ( 9) 00:12:19.175 7.111 - 7.159: 99.2146% ( 4) 00:12:19.175 7.159 - 7.206: 99.2317% ( 2) 00:12:19.175 7.680 - 7.727: 99.2402% ( 1) 00:12:19.175 7.917 - 7.964: 99.2488% ( 1) 00:12:19.175 8.107 - 8.154: 99.2573% ( 1) 00:12:19.175 8.201 - 8.249: 99.2744% ( 2) 00:12:19.175 8.249 - 8.296: 99.2829% ( 1) 00:12:19.175 8.486 - 8.533: 99.2914% ( 1) 00:12:19.175 8.581 - 8.628: 99.3085% ( 2) 00:12:19.175 8.628 - 8.676: 99.3171% ( 1) 00:12:19.175 8.723 - 8.770: 99.3341% ( 2) 00:12:19.175 8.770 - 8.818: 99.3427% ( 1) 00:12:19.175 8.865 - 8.913: 99.3597% ( 2) 00:12:19.175 9.102 - 9.150: 99.3768% ( 2) 00:12:19.175 9.292 - 9.339: 99.3939% ( 2) 00:12:19.175 9.339 - 9.387: 99.4024% ( 1) 00:12:19.175 9.387 - 9.434: 99.4195% ( 2) 00:12:19.175 9.481 - 9.529: 99.4280% ( 1) 00:12:19.175 9.529 - 9.576: 99.4451% ( 2) 00:12:19.175 9.576 - 9.624: 99.4536% ( 1) 00:12:19.175 9.861 - 9.908: 99.4622% ( 1) 00:12:19.176 10.003 - 10.050: 99.4793% ( 2) 00:12:19.176 10.145 - 10.193: 99.5049% ( 3) 00:12:19.176 10.240 - 10.287: 99.5219% ( 2) 00:12:19.176 10.287 - 10.335: 99.5305% ( 1) 00:12:19.176 10.335 - 10.382: 99.5390% ( 1) 00:12:19.176 10.382 - 10.430: 99.5475% ( 1) 00:12:19.176 10.430 - 10.477: 99.5561% ( 1) 00:12:19.176 10.714 - 10.761: 99.5732% ( 2) 00:12:19.176 10.761 - 10.809: 99.5817% ( 1) 00:12:19.176 10.951 - 10.999: 99.5902% ( 1) 00:12:19.176 10.999 - 11.046: 99.5988% ( 1) 00:12:19.176 11.046 - 11.093: 99.6073% ( 1) 00:12:19.176 11.141 - 11.188: 99.6158% ( 1) 00:12:19.176 11.188 - 11.236: 99.6244% ( 1) 00:12:19.176 11.330 - 11.378: 99.6329% ( 1) 00:12:19.176 11.378 - 11.425: 99.6415% ( 1) 00:12:19.176 11.425 - 11.473: 99.6500% ( 1) 00:12:19.176 11.804 - 11.852: 99.6671% ( 2) 00:12:19.176 11.899 - 11.947: 99.6756% ( 1) 00:12:19.176 12.231 - 12.326: 99.6927% ( 2) 00:12:19.176 12.421 - 12.516: 99.7012% ( 1) 00:12:19.176 12.516 - 12.610: 99.7097% ( 1) 00:12:19.176 12.610 - 12.705: 99.7183% ( 1) 00:12:19.176 12.705 - 12.800: 99.7268% ( 1) 00:12:19.176 12.895 - 12.990: 99.7354% ( 1) 00:12:19.176 12.990 - 13.084: 99.7524% ( 2) 00:12:19.176 13.179 - 13.274: 99.7610% ( 1) 00:12:19.176 13.559 - 13.653: 99.7695% ( 1) 00:12:19.176 13.653 - 13.748: 99.7780% ( 1) 
00:12:19.176 13.748 - 13.843: 99.8037% ( 3) 00:12:19.176 13.843 - 13.938: 99.8207% ( 2) 00:12:19.176 13.938 - 14.033: 99.8378% ( 2) 00:12:19.176 14.033 - 14.127: 99.8634% ( 3) 00:12:19.176 14.317 - 14.412: 99.8719% ( 1) 00:12:19.176 14.791 - 14.886: 99.8805% ( 1) 00:12:19.176 15.550 - 15.644: 99.8890% ( 1) 00:12:19.176 16.119 - 16.213: 99.8976% ( 1) 00:12:19.176 3980.705 - 4004.978: 99.9402% ( 5) 00:12:19.176 4004.978 - 4029.250: 99.9915% ( 6) 00:12:19.176 5000.154 - 5024.427: 100.0000% ( 1) 00:12:19.176 00:12:19.176 Complete histogram 00:12:19.176 ================== 00:12:19.176 Range in us Cumulative Count 00:12:19.176 2.631 - 2.643: 0.4268% ( 50) 00:12:19.176 2.643 - 2.655: 11.7637% ( 1328) 00:12:19.176 2.655 - 2.667: 39.3717% ( 3234) 00:12:19.176 2.667 - 2.679: 47.3536% ( 935) 00:12:19.176 2.679 - 2.690: 56.1380% ( 1029) 00:12:19.176 2.690 - 2.702: 77.5653% ( 2510) 00:12:19.176 2.702 - 2.714: 88.2619% ( 1253) 00:12:19.176 2.714 - 2.726: 92.1120% ( 451) 00:12:19.176 2.726 - 2.738: 94.8609% ( 322) 00:12:19.176 2.738 - 2.750: 96.6109% ( 205) 00:12:19.176 2.750 - 2.761: 97.3792% ( 90) 00:12:19.176 2.761 - 2.773: 97.6951% ( 37) 00:12:19.176 2.773 - 2.785: 97.8231% ( 15) 00:12:19.176 2.785 - 2.797: 97.8658% ( 5) 00:12:19.176 2.797 - 2.821: 97.9256% ( 3) 00:12:19.176 [2024-07-24 19:09:25.071767] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:19.176 2.833 - 2.844: 97.9341% ( 1) 00:12:19.176 2.844 - 2.856: 97.9512% ( 2) 00:12:19.176 2.856 - 2.868: 97.9597% ( 1) 00:12:19.176 2.868 - 2.880: 98.0024% ( 5) 00:12:19.176 2.880 - 2.892: 98.0195% ( 2) 00:12:19.176 2.892 - 2.904: 98.0536% ( 4) 00:12:19.176 2.904 - 2.916: 98.0621% ( 1) 00:12:19.176 2.916 - 2.927: 98.1048% ( 5) 00:12:19.176 2.927 - 2.939: 98.1304% ( 3) 00:12:19.176 2.939 - 2.951: 98.1390% ( 1) 00:12:19.176 2.951 - 2.963: 98.1561% ( 2) 00:12:19.176 2.963 - 2.975: 98.1731% ( 2) 00:12:19.176 2.975 - 2.987: 98.1817% ( 1) 00:12:19.176 3.022 - 3.034: 98.1987% ( 2) 00:12:19.176 3.058 - 3.081: 98.2158% ( 2) 00:12:19.176 3.105 - 3.129: 98.2329% ( 2) 00:12:19.176 3.129 - 3.153: 98.2500% ( 2) 00:12:19.176 3.153 - 3.176: 98.2756% ( 3) 00:12:19.176 3.176 - 3.200: 98.3012% ( 3) 00:12:19.176 3.200 - 3.224: 98.3183% ( 2) 00:12:19.176 3.224 - 3.247: 98.3695% ( 6) 00:12:19.176 3.247 - 3.271: 98.4036% ( 4) 00:12:19.176 3.271 - 3.295: 98.4292% ( 3) 00:12:19.176 3.295 - 3.319: 98.4805% ( 6) 00:12:19.176 3.319 - 3.342: 98.4890% ( 1) 00:12:19.176 3.342 - 3.366: 98.5146% ( 3) 00:12:19.176 3.366 - 3.390: 98.5487% ( 4) 00:12:19.176 3.390 - 3.413: 98.5914% ( 5) 00:12:19.176 3.413 - 3.437: 98.6426% ( 6) 00:12:19.176 3.437 - 3.461: 98.6597% ( 2) 00:12:19.176 3.461 - 3.484: 98.6768% ( 2) 00:12:19.176 3.484 - 3.508: 98.7024% ( 3) 00:12:19.176 3.508 - 3.532: 98.7195% ( 2) 00:12:19.176 3.532 - 3.556: 98.7451% ( 3) 00:12:19.176 3.556 - 3.579: 98.7622% ( 2) 00:12:19.176 3.579 - 3.603: 98.7707% ( 1) 00:12:19.176 3.603 - 3.627: 98.8134% ( 5) 00:12:19.176 3.627 - 3.650: 98.8646% ( 6) 00:12:19.176 3.650 - 3.674: 98.8817% ( 2) 00:12:19.176 3.674 - 3.698: 98.8902% ( 1) 00:12:19.176 3.698 - 3.721: 98.9500% ( 7) 00:12:19.176 3.721 - 3.745: 98.9670% ( 2) 00:12:19.176 3.745 - 3.769: 98.9927% ( 3) 00:12:19.176 3.769 - 3.793: 99.0012% ( 1) 00:12:19.176 3.816 - 3.840: 99.0268% ( 3) 00:12:19.176 3.840 - 3.864: 99.0353% ( 1) 00:12:19.176 3.864 - 3.887: 99.0524% ( 2) 00:12:19.176 3.911 - 3.935: 99.0610% ( 1) 00:12:19.176 3.935 - 3.959: 99.0780% ( 2) 00:12:19.176 3.959 - 3.982: 99.0951% 
( 2) 00:12:19.176 4.053 - 4.077: 99.1036% ( 1) 00:12:19.176 4.243 - 4.267: 99.1122% ( 1) 00:12:19.176 4.693 - 4.717: 99.1207% ( 1) 00:12:19.176 4.764 - 4.788: 99.1292% ( 1) 00:12:19.176 5.191 - 5.215: 99.1378% ( 1) 00:12:19.176 6.044 - 6.068: 99.1463% ( 1) 00:12:19.176 6.447 - 6.495: 99.1549% ( 1) 00:12:19.176 6.637 - 6.684: 99.1719% ( 2) 00:12:19.176 6.732 - 6.779: 99.1805% ( 1) 00:12:19.176 6.779 - 6.827: 99.1890% ( 1) 00:12:19.176 6.921 - 6.969: 99.1975% ( 1) 00:12:19.176 7.111 - 7.159: 99.2061% ( 1) 00:12:19.176 7.348 - 7.396: 99.2146% ( 1) 00:12:19.176 7.633 - 7.680: 99.2232% ( 1) 00:12:19.176 8.012 - 8.059: 99.2317% ( 1) 00:12:19.176 8.486 - 8.533: 99.2402% ( 1) 00:12:19.176 8.533 - 8.581: 99.2573% ( 2) 00:12:19.176 9.007 - 9.055: 99.2658% ( 1) 00:12:19.176 9.055 - 9.102: 99.2744% ( 1) 00:12:19.176 9.719 - 9.766: 99.2829% ( 1) 00:12:19.176 10.003 - 10.050: 99.2914% ( 1) 00:12:19.176 15.076 - 15.170: 99.3000% ( 1) 00:12:19.176 15.455 - 15.550: 99.3085% ( 1) 00:12:19.176 2342.305 - 2354.441: 99.3171% ( 1) 00:12:19.176 3980.705 - 4004.978: 99.7524% ( 51) 00:12:19.176 4004.978 - 4029.250: 99.9915% ( 28) 00:12:19.176 5000.154 - 5024.427: 100.0000% ( 1) 00:12:19.176 00:12:19.176 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:19.176 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:19.176 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:19.176 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:19.176 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:19.434 [ 00:12:19.434 { 00:12:19.434 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:19.434 "subtype": "Discovery", 00:12:19.434 "listen_addresses": [], 00:12:19.434 "allow_any_host": true, 00:12:19.434 "hosts": [] 00:12:19.434 }, 00:12:19.434 { 00:12:19.434 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:19.434 "subtype": "NVMe", 00:12:19.434 "listen_addresses": [ 00:12:19.434 { 00:12:19.434 "trtype": "VFIOUSER", 00:12:19.434 "adrfam": "IPv4", 00:12:19.434 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:19.434 "trsvcid": "0" 00:12:19.434 } 00:12:19.434 ], 00:12:19.434 "allow_any_host": true, 00:12:19.434 "hosts": [], 00:12:19.434 "serial_number": "SPDK1", 00:12:19.434 "model_number": "SPDK bdev Controller", 00:12:19.434 "max_namespaces": 32, 00:12:19.434 "min_cntlid": 1, 00:12:19.434 "max_cntlid": 65519, 00:12:19.434 "namespaces": [ 00:12:19.434 { 00:12:19.434 "nsid": 1, 00:12:19.434 "bdev_name": "Malloc1", 00:12:19.434 "name": "Malloc1", 00:12:19.434 "nguid": "193CCF3387B24516A9889A5D424B2FBC", 00:12:19.434 "uuid": "193ccf33-87b2-4516-a988-9a5d424b2fbc" 00:12:19.434 }, 00:12:19.434 { 00:12:19.434 "nsid": 2, 00:12:19.434 "bdev_name": "Malloc3", 00:12:19.434 "name": "Malloc3", 00:12:19.435 "nguid": "D354F842B77B448A846A8FCCAB0562D8", 00:12:19.435 "uuid": "d354f842-b77b-448a-846a-8fccab0562d8" 00:12:19.435 } 00:12:19.435 ] 00:12:19.435 }, 00:12:19.435 { 00:12:19.435 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:19.435 "subtype": "NVMe", 00:12:19.435 "listen_addresses": [ 00:12:19.435 { 00:12:19.435 "trtype": "VFIOUSER", 00:12:19.435 "adrfam": 
"IPv4", 00:12:19.435 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:19.435 "trsvcid": "0" 00:12:19.435 } 00:12:19.435 ], 00:12:19.435 "allow_any_host": true, 00:12:19.435 "hosts": [], 00:12:19.435 "serial_number": "SPDK2", 00:12:19.435 "model_number": "SPDK bdev Controller", 00:12:19.435 "max_namespaces": 32, 00:12:19.435 "min_cntlid": 1, 00:12:19.435 "max_cntlid": 65519, 00:12:19.435 "namespaces": [ 00:12:19.435 { 00:12:19.435 "nsid": 1, 00:12:19.435 "bdev_name": "Malloc2", 00:12:19.435 "name": "Malloc2", 00:12:19.435 "nguid": "33F6B12AEE14462F99EEC4BE5F8B3EE9", 00:12:19.435 "uuid": "33f6b12a-ee14-462f-99ee-c4be5f8b3ee9" 00:12:19.435 } 00:12:19.435 ] 00:12:19.435 } 00:12:19.435 ] 00:12:19.435 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:19.435 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2546208 00:12:19.435 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:19.435 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:19.435 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:19.435 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:19.435 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:19.435 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:19.435 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:19.435 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:19.692 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.692 [2024-07-24 19:09:25.601030] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:19.950 Malloc4 00:12:19.950 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:20.208 [2024-07-24 19:09:26.046499] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:20.208 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:20.208 Asynchronous Event Request test 00:12:20.208 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:20.208 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:20.208 Registering asynchronous event callbacks... 00:12:20.208 Starting namespace attribute notice tests for all controllers... 00:12:20.208 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:20.208 aer_cb - Changed Namespace 00:12:20.208 Cleaning up... 
00:12:20.467 [ 00:12:20.467 { 00:12:20.467 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:20.467 "subtype": "Discovery", 00:12:20.467 "listen_addresses": [], 00:12:20.467 "allow_any_host": true, 00:12:20.467 "hosts": [] 00:12:20.467 }, 00:12:20.467 { 00:12:20.467 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:20.467 "subtype": "NVMe", 00:12:20.467 "listen_addresses": [ 00:12:20.467 { 00:12:20.467 "trtype": "VFIOUSER", 00:12:20.467 "adrfam": "IPv4", 00:12:20.467 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:20.467 "trsvcid": "0" 00:12:20.467 } 00:12:20.467 ], 00:12:20.467 "allow_any_host": true, 00:12:20.467 "hosts": [], 00:12:20.467 "serial_number": "SPDK1", 00:12:20.467 "model_number": "SPDK bdev Controller", 00:12:20.467 "max_namespaces": 32, 00:12:20.467 "min_cntlid": 1, 00:12:20.467 "max_cntlid": 65519, 00:12:20.467 "namespaces": [ 00:12:20.467 { 00:12:20.467 "nsid": 1, 00:12:20.467 "bdev_name": "Malloc1", 00:12:20.467 "name": "Malloc1", 00:12:20.467 "nguid": "193CCF3387B24516A9889A5D424B2FBC", 00:12:20.467 "uuid": "193ccf33-87b2-4516-a988-9a5d424b2fbc" 00:12:20.467 }, 00:12:20.467 { 00:12:20.467 "nsid": 2, 00:12:20.467 "bdev_name": "Malloc3", 00:12:20.467 "name": "Malloc3", 00:12:20.467 "nguid": "D354F842B77B448A846A8FCCAB0562D8", 00:12:20.467 "uuid": "d354f842-b77b-448a-846a-8fccab0562d8" 00:12:20.467 } 00:12:20.467 ] 00:12:20.467 }, 00:12:20.467 { 00:12:20.467 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:20.467 "subtype": "NVMe", 00:12:20.468 "listen_addresses": [ 00:12:20.468 { 00:12:20.468 "trtype": "VFIOUSER", 00:12:20.468 "adrfam": "IPv4", 00:12:20.468 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:20.468 "trsvcid": "0" 00:12:20.468 } 00:12:20.468 ], 00:12:20.468 "allow_any_host": true, 00:12:20.468 "hosts": [], 00:12:20.468 "serial_number": "SPDK2", 00:12:20.468 "model_number": "SPDK bdev Controller", 00:12:20.468 "max_namespaces": 32, 00:12:20.468 "min_cntlid": 1, 00:12:20.468 "max_cntlid": 65519, 00:12:20.468 "namespaces": [ 00:12:20.468 { 00:12:20.468 "nsid": 1, 00:12:20.468 "bdev_name": "Malloc2", 00:12:20.468 "name": "Malloc2", 00:12:20.468 "nguid": "33F6B12AEE14462F99EEC4BE5F8B3EE9", 00:12:20.468 "uuid": "33f6b12a-ee14-462f-99ee-c4be5f8b3ee9" 00:12:20.468 }, 00:12:20.468 { 00:12:20.468 "nsid": 2, 00:12:20.468 "bdev_name": "Malloc4", 00:12:20.468 "name": "Malloc4", 00:12:20.468 "nguid": "BFF211FCC07A4F42B5185262DA2F5900", 00:12:20.468 "uuid": "bff211fc-c07a-4f42-b518-5262da2f5900" 00:12:20.468 } 00:12:20.468 ] 00:12:20.468 } 00:12:20.468 ] 00:12:20.468 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2546208 00:12:20.468 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:20.468 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2541141 00:12:20.468 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2541141 ']' 00:12:20.468 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2541141 00:12:20.468 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:12:20.468 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:20.468 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2541141 00:12:20.468 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:20.468 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:20.468 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2541141' 00:12:20.468 killing process with pid 2541141 00:12:20.468 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2541141 00:12:20.468 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2541141 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2546320 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2546320' 00:12:20.727 Process pid: 2546320 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2546320 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2546320 ']' 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:20.727 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:20.727 [2024-07-24 19:09:26.725748] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:20.727 [2024-07-24 19:09:26.727022] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:12:20.727 [2024-07-24 19:09:26.727086] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.985 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.985 [2024-07-24 19:09:26.791742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.985 [2024-07-24 19:09:26.911808] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.985 [2024-07-24 19:09:26.911872] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.985 [2024-07-24 19:09:26.911887] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.985 [2024-07-24 19:09:26.911901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.985 [2024-07-24 19:09:26.911912] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.985 [2024-07-24 19:09:26.911971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.985 [2024-07-24 19:09:26.912023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.985 [2024-07-24 19:09:26.912076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.985 [2024-07-24 19:09:26.912073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.244 [2024-07-24 19:09:27.009619] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:21.244 [2024-07-24 19:09:27.009797] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:21.244 [2024-07-24 19:09:27.010010] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:21.244 [2024-07-24 19:09:27.010518] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:21.244 [2024-07-24 19:09:27.010777] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
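The target relaunch above runs in interrupt mode on four cores, and the trace that follows configures it over RPC. A condensed sketch of both steps (the launch flags and every RPC call are taken verbatim from the log; the loop, the shortened ./spdk paths, and the readiness probe are editorial assumptions, not the script's own waitforlisten helper):

  # relaunch the target on cores 0-3 with interrupt mode enabled (from the @54 step)
  ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  # assumed readiness probe: poll the RPC socket until it answers
  until ./spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  # one VFIOUSER transport (-M -I as logged), then a malloc-backed subsystem per
  # vfio-user socket directory, exactly as the @64-@74 steps below perform for i=1,2
  ./spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      ./spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
      ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done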
00:12:21.244 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:21.244 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:12:21.244 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:22.177 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:22.436 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:22.436 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:22.436 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:22.436 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:22.436 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:22.696 Malloc1 00:12:22.696 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:22.955 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:23.521 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:23.778 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:23.778 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:23.778 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:24.036 Malloc2 00:12:24.036 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:24.294 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:24.552 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:24.810 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:24.810 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2546320 00:12:24.810 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 2546320 ']' 00:12:24.810 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2546320 00:12:24.810 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:12:24.810 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:24.810 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2546320 00:12:24.810 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:24.810 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:24.810 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2546320' 00:12:24.810 killing process with pid 2546320 00:12:24.810 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2546320 00:12:24.810 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2546320 00:12:25.069 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:25.069 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:25.069 00:12:25.069 real 0m53.775s 00:12:25.069 user 3m32.270s 00:12:25.069 sys 0m4.345s 00:12:25.069 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.069 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:25.069 ************************************ 00:12:25.069 END TEST nvmf_vfio_user 00:12:25.069 ************************************ 00:12:25.069 19:09:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:25.069 19:09:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:25.069 19:09:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.069 19:09:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:25.328 ************************************ 00:12:25.328 START TEST nvmf_vfio_user_nvme_compliance 00:12:25.328 ************************************ 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:25.328 * Looking for test storage... 
00:12:25.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.328 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2546788 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2546788' 00:12:25.329 Process pid: 2546788 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2546788 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2546788 ']' 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:25.329 [2024-07-24 19:09:31.232012] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:12:25.329 [2024-07-24 19:09:31.232116] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.329 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.329 [2024-07-24 19:09:31.296289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:25.587 [2024-07-24 19:09:31.412898] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.587 [2024-07-24 19:09:31.412965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.587 [2024-07-24 19:09:31.412981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.587 [2024-07-24 19:09:31.412994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.587 [2024-07-24 19:09:31.413005] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.587 [2024-07-24 19:09:31.413102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.587 [2024-07-24 19:09:31.413163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.587 [2024-07-24 19:09:31.413167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.587 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:25.587 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:12:25.587 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:26.519 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:26.519 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:26.519 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:26.519 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.519 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:26.519 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.519 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:26.519 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:26.519 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.519 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:26.777 malloc0 00:12:26.777 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.777 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:12:26.777 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.777 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:26.777 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.777 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:26.777 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.777 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:26.777 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.777 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:26.777 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.777 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:26.777 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.777 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:26.777 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.777 00:12:26.777 00:12:26.777 CUnit - A unit testing framework for C - Version 2.1-3 00:12:26.777 http://cunit.sourceforge.net/ 00:12:26.777 00:12:26.777 00:12:26.777 Suite: nvme_compliance 00:12:26.777 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 19:09:32.756068] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:26.777 [2024-07-24 19:09:32.757615] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:26.777 [2024-07-24 19:09:32.757643] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:26.777 [2024-07-24 19:09:32.757658] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:26.777 [2024-07-24 19:09:32.759090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.034 passed 00:12:27.034 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 19:09:32.866841] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.034 [2024-07-24 19:09:32.869871] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.034 passed 00:12:27.034 Test: admin_identify_ns ...[2024-07-24 19:09:32.980322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.034 [2024-07-24 19:09:33.040502] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:27.034 [2024-07-24 19:09:33.048503] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:27.291 [2024-07-24 
19:09:33.069663] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.291 passed 00:12:27.291 Test: admin_get_features_mandatory_features ...[2024-07-24 19:09:33.176303] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.291 [2024-07-24 19:09:33.179335] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.291 passed 00:12:27.291 Test: admin_get_features_optional_features ...[2024-07-24 19:09:33.277967] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.291 [2024-07-24 19:09:33.281002] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.556 passed 00:12:27.556 Test: admin_set_features_number_of_queues ...[2024-07-24 19:09:33.389755] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.556 [2024-07-24 19:09:33.494637] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.556 passed 00:12:27.813 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 19:09:33.598246] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.813 [2024-07-24 19:09:33.601273] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.813 passed 00:12:27.813 Test: admin_get_log_page_with_lpo ...[2024-07-24 19:09:33.699404] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.813 [2024-07-24 19:09:33.765493] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:27.813 [2024-07-24 19:09:33.778613] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.813 passed 00:12:28.070 Test: fabric_property_get ...[2024-07-24 19:09:33.880685] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.070 [2024-07-24 19:09:33.882025] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:28.070 [2024-07-24 19:09:33.883711] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.070 passed 00:12:28.070 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 19:09:33.986390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.070 [2024-07-24 19:09:33.987741] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:28.070 [2024-07-24 19:09:33.989419] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.070 passed 00:12:28.327 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 19:09:34.086710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.327 [2024-07-24 19:09:34.171493] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:28.327 [2024-07-24 19:09:34.187500] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:28.327 [2024-07-24 19:09:34.192618] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.327 passed 00:12:28.327 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 19:09:34.295122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.327 [2024-07-24 19:09:34.296458] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:12:28.327 [2024-07-24 19:09:34.298150] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.327 passed 00:12:28.585 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 19:09:34.396770] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.585 [2024-07-24 19:09:34.475502] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:28.585 [2024-07-24 19:09:34.499490] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:28.585 [2024-07-24 19:09:34.504655] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.585 passed 00:12:28.842 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 19:09:34.612441] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.842 [2024-07-24 19:09:34.613800] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:28.842 [2024-07-24 19:09:34.613839] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:28.842 [2024-07-24 19:09:34.615476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.842 passed 00:12:28.842 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 19:09:34.723919] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.842 [2024-07-24 19:09:34.816504] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:28.842 [2024-07-24 19:09:34.824489] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:28.842 [2024-07-24 19:09:34.832508] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:28.842 [2024-07-24 19:09:34.840496] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:29.100 [2024-07-24 19:09:34.869644] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.100 passed 00:12:29.100 Test: admin_create_io_sq_verify_pc ...[2024-07-24 19:09:34.972229] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:29.100 [2024-07-24 19:09:34.988528] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:29.100 [2024-07-24 19:09:35.005809] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.100 passed 00:12:29.100 Test: admin_create_io_qp_max_qps ...[2024-07-24 19:09:35.114531] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.472 [2024-07-24 19:09:36.205522] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:30.730 [2024-07-24 19:09:36.590110] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:30.730 passed 00:12:30.730 Test: admin_create_io_sq_shared_cq ...[2024-07-24 19:09:36.688604] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.989 [2024-07-24 19:09:36.822489] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:30.989 [2024-07-24 19:09:36.859584] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:30.989 passed 00:12:30.989 00:12:30.989 Run Summary: Type Total Ran Passed Failed Inactive 00:12:30.989 
suites 1 1 n/a 0 0 00:12:30.989 tests 18 18 18 0 0 00:12:30.989 asserts 360 360 360 0 n/a 00:12:30.989 00:12:30.989 Elapsed time = 1.735 seconds 00:12:30.989 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2546788 00:12:30.989 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2546788 ']' 00:12:30.989 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2546788 00:12:30.989 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:12:30.989 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:30.989 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2546788 00:12:30.989 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:30.989 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:30.989 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2546788' 00:12:30.989 killing process with pid 2546788 00:12:30.989 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2546788 00:12:30.989 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2546788 00:12:31.247 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:31.247 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:31.247 00:12:31.247 real 0m6.076s 00:12:31.247 user 0m17.017s 00:12:31.247 sys 0m0.554s 00:12:31.247 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.247 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:31.247 ************************************ 00:12:31.247 END TEST nvmf_vfio_user_nvme_compliance 00:12:31.247 ************************************ 00:12:31.247 19:09:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:31.247 19:09:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:31.247 19:09:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.247 19:09:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.247 ************************************ 00:12:31.247 START TEST nvmf_vfio_user_fuzz 00:12:31.247 ************************************ 00:12:31.247 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:31.506 * Looking for test storage... 
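Before the fuzz suite starts below, a note on the compliance run that just ended: CUnit reports 1 suite, 18 of 18 tests and 360 of 360 asserts passed in 1.735 seconds. The many *ERROR* lines inside the passing tests are expected, since each case deliberately submits an invalid admin or queue command (bad NSID, bad sqid/cqid, non-physically-contiguous queues, out-of-range log-page offsets) and asserts that the vfio-user target rejects it; the enabling/disabling controller NOTICE pairs bracket each case. The suite can be replayed against a running target with the invocation traced at compliance.sh@40, here written with the SPDK tree as the working directory:

    test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'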
00:12:31.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.506 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2547446 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2547446' 00:12:31.507 Process pid: 2547446 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2547446 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2547446 ']' 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.507 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:31.765 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.765 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:12:31.765 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:32.699 malloc0 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.699 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:32.958 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.958 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
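With the target up, vfio_user_fuzz.sh provisions it over RPC the same way compliance.sh did: a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks (the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values above), a subsystem that allows any host (-a) with serial number spdk, the bdev attached as a namespace, and a VFIOUSER listener rooted at /var/run/vfio-user. rpc_cmd is the harness front end for SPDK's scripts/rpc.py, so the same sequence issued by hand would look roughly like this (compliance.sh additionally capped its subsystem at 32 namespaces with -m 32):

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0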
00:12:32.958 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:05.077 Fuzzing completed. Shutting down the fuzz application 00:13:05.077 00:13:05.077 Dumping successful admin opcodes: 00:13:05.077 8, 9, 10, 24, 00:13:05.077 Dumping successful io opcodes: 00:13:05.077 0, 00:13:05.077 NS: 0x200003a1ef00 I/O qp, Total commands completed: 579373, total successful commands: 2225, random_seed: 2420587392 00:13:05.077 NS: 0x200003a1ef00 admin qp, Total commands completed: 141923, total successful commands: 1150, random_seed: 1925827520 00:13:05.077 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:05.077 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.077 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:05.077 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.077 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2547446 00:13:05.077 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2547446 ']' 00:13:05.077 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2547446 00:13:05.077 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:13:05.077 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:05.077 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2547446 00:13:05.077 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:05.078 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:05.078 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2547446' 00:13:05.078 killing process with pid 2547446 00:13:05.078 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2547446 00:13:05.078 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2547446 00:13:05.078 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:05.078 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:05.078 00:13:05.078 real 0m32.711s 00:13:05.078 user 0m33.986s 00:13:05.078 sys 0m26.135s 00:13:05.078 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.078 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:05.078 
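The 30-second fuzz run (-t 30, consistent with the 32.7 s wall time above) is repeatable because -S 123456 fixes the seed; the per-queue random_seed values in the summary are presumably derived from that master seed. Roughly 579 k I/O and 142 k admin commands were pushed through the vfio-user endpoint, with the target pinned to core 0 (-m 0x1 on nvmf_tgt) and the fuzzer to core 1 (-m 0x2). Decoded from decimal, the successful admin opcodes 8, 9, 10 and 24 are Abort, Set Features, Get Features and Keep Alive, and successful I/O opcode 0 is Flush, i.e. commands with few fields that a random payload must get right. The run can be reproduced against a target set up as sketched earlier using the traced invocation, flags copied verbatim:

    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a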
************************************ 00:13:05.078 END TEST nvmf_vfio_user_fuzz 00:13:05.078 ************************************ 00:13:05.078 19:10:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:05.078 19:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:05.078 19:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.078 19:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:05.078 ************************************ 00:13:05.078 START TEST nvmf_auth_target 00:13:05.078 ************************************ 00:13:05.078 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:05.078 * Looking for test storage... 00:13:05.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:05.078 19:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:05.078 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.017 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:06.017 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:06.017 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:06.018 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:06.018 Found net devices under 0000:08:00.0: cvl_0_0 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:06.018 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:06.018 Found net devices under 0000:08:00.1: cvl_0_1 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.018 19:10:11 
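nvmf_tcp_init then strings the two ports into a loopback-over-the-wire topology: the target port is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator port keeps 10.0.0.1 in the root namespace, and the lines that follow raise lo, open TCP port 4420, and ping-check both directions. Consolidated into one sketch using this run's interface names:

#!/usr/bin/env bash
# Rebuild the trace's test topology: traffic from the host-side port
# (cvl_0_1, 10.0.0.1) to the namespaced target port (cvl_0_0, 10.0.0.2).
set -e
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # isolate target port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                     # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns

Putting the target port in its own namespace is what forces initiator traffic to actually cross the physical link between the two ports instead of being short-circuited by the local routing table.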
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:06.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:13:06.018 00:13:06.018 --- 10.0.0.2 ping statistics --- 00:13:06.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.018 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:13:06.018 00:13:06.018 --- 10.0.0.1 ping statistics --- 00:13:06.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.018 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2551604 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2551604 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2551604 ']' 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.018 19:10:11 
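Two SPDK processes are started next: nvmf_tgt, the NVMe-oF target, runs inside the namespace and answers RPCs on the default /var/tmp/spdk.sock, while spdk_tgt (launched on the next line of the trace) plays the host side on /var/tmp/host.sock; the hostrpc calls seen throughout are just rpc.py pointed at that second socket. The arrangement, sketched with this workspace's paths and the exact flags from the log:

#!/usr/bin/env bash
# Two cooperating SPDK apps: the target binds its listener inside the
# namespace, the host-side app stays in the root namespace.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF -L nvmf_auth &                  # target, RPC on spdk.sock
"$spdk/build/bin/spdk_tgt" -m 2 -r /var/tmp/host.sock -L nvme_auth &  # host app
hostrpc() { "$spdk/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }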
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.018 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2551653 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3e62e5164e0aa20e8855c1bb84e931d387fd951b300f22c0 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.0fk 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3e62e5164e0aa20e8855c1bb84e931d387fd951b300f22c0 0 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3e62e5164e0aa20e8855c1bb84e931d387fd951b300f22c0 0 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3e62e5164e0aa20e8855c1bb84e931d387fd951b300f22c0 00:13:06.277 19:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.0fk 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.0fk 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.0fk 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:06.277 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:06.278 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:06.278 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:06.278 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:06.278 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7a3a400621e39b73e606b96dfd9b8fd81f24efc74e652aa29ccba1e20eac88d2 00:13:06.278 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:06.278 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.IfS 00:13:06.278 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7a3a400621e39b73e606b96dfd9b8fd81f24efc74e652aa29ccba1e20eac88d2 3 00:13:06.278 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7a3a400621e39b73e606b96dfd9b8fd81f24efc74e652aa29ccba1e20eac88d2 3 00:13:06.278 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:06.278 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:06.278 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7a3a400621e39b73e606b96dfd9b8fd81f24efc74e652aa29ccba1e20eac88d2 00:13:06.278 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:06.278 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:06.536 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.IfS 00:13:06.536 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.IfS 00:13:06.536 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.IfS 00:13:06.536 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:06.536 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:06.536 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:06.536 19:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f6fea43e35a66a7018719ae628fcf59d 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.CSY 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f6fea43e35a66a7018719ae628fcf59d 1 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f6fea43e35a66a7018719ae628fcf59d 1 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f6fea43e35a66a7018719ae628fcf59d 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.CSY 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.CSY 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.CSY 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=91397fea2ba10d85b147ffe77b9efaa5f7e4acb3bf4d668b 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Fib 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 91397fea2ba10d85b147ffe77b9efaa5f7e4acb3bf4d668b 2 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
91397fea2ba10d85b147ffe77b9efaa5f7e4acb3bf4d668b 2 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=91397fea2ba10d85b147ffe77b9efaa5f7e4acb3bf4d668b 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Fib 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Fib 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Fib 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e0122c96284d5802503efbfb3356bb5c2cb7a9d658ecb081 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JaM 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e0122c96284d5802503efbfb3356bb5c2cb7a9d658ecb081 2 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e0122c96284d5802503efbfb3356bb5c2cb7a9d658ecb081 2 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e0122c96284d5802503efbfb3356bb5c2cb7a9d658ecb081 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JaM 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JaM 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.JaM 00:13:06.537 19:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9bef0939f9db60486d8f44419e7e73f3 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Dtt 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9bef0939f9db60486d8f44419e7e73f3 1 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9bef0939f9db60486d8f44419e7e73f3 1 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9bef0939f9db60486d8f44419e7e73f3 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:06.537 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Dtt 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Dtt 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Dtt 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c8c94c5f2a75392d9eb0df2cda0c592e6a7b9e60e4e63ae62adddccc908fe1e4 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:06.796 
19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.VHG 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c8c94c5f2a75392d9eb0df2cda0c592e6a7b9e60e4e63ae62adddccc908fe1e4 3 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c8c94c5f2a75392d9eb0df2cda0c592e6a7b9e60e4e63ae62adddccc908fe1e4 3 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c8c94c5f2a75392d9eb0df2cda0c592e6a7b9e60e4e63ae62adddccc908fe1e4 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.VHG 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.VHG 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.VHG 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2551604 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2551604 ']' 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.796 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.055 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:07.055 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:07.055 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2551653 /var/tmp/host.sock 00:13:07.055 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2551653 ']' 00:13:07.055 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:07.055 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:07.055 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
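Four key/ctrlr-key pairs now exist (key0..key3 plus ckey0..ckey2; ckey3 is intentionally left empty), each produced by the gen_dhchap_key recipe traced above: read len/2 random bytes as a hex string, then base64-wrap that ASCII string plus its little-endian CRC32 under a DHHC-1:<digest>: prefix, where the digest code is 0=null, 1=sha256, 2=sha384, 3=sha512. A condensed stand-alone version, an illustration of what the trace shows rather than the SPDK helper itself:

#!/usr/bin/env bash
# gen_dhchap_key <digest-code> <hex-len>: emit a DH-HMAC-CHAP secret file.
digest=${1:-0} len=${2:-48}
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # e.g. 48 hex chars
secret=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                           # hex string *as ASCII*
crc = zlib.crc32(key).to_bytes(4, "little")          # 4-byte checksum suffix
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
EOF
)
file=$(mktemp -t spdk.key.XXX)
chmod 0600 "$file"                                   # secrets stay owner-only
printf '%s\n' "$secret" > "$file"
echo "$file"

As a cross-check, the key0 secret that appears in the nvme connect further down (DHHC-1:00:M2U2MmU1...) base64-decodes back to exactly the 3e62e516... hex string generated here, followed by its four CRC bytes.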
00:13:07.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:07.055 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:07.055 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.313 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:07.313 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:07.313 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:07.313 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.313 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.313 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.313 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:07.313 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0fk 00:13:07.313 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.313 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.313 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.313 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.0fk 00:13:07.313 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.0fk 00:13:07.573 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.IfS ]] 00:13:07.573 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IfS 00:13:07.573 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.573 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.573 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.573 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IfS 00:13:07.573 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IfS 00:13:07.832 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:07.832 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.CSY 00:13:07.832 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.832 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.832 19:10:13 
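Each key file is then registered under a stable name on both sides: rpc_cmd adds it to the target's keyring and hostrpc to the host app's, so later steps can refer to key0..key3 and ckey0..ckey2 purely by name; the trace continues through key3 on the following lines. The registration pass condensed to one loop (file names are this run's random mktemp picks):

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
keys=(/tmp/spdk.key-null.0fk /tmp/spdk.key-sha256.CSY
      /tmp/spdk.key-sha384.JaM /tmp/spdk.key-sha512.VHG)
ckeys=(/tmp/spdk.key-sha512.IfS /tmp/spdk.key-sha384.Fib /tmp/spdk.key-sha256.Dtt "")
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[i]}"                       # target
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}" # host
    [[ -n ${ckeys[i]} ]] || continue                                       # ckey3 is empty
    "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
done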
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.832 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.CSY 00:13:07.832 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.CSY 00:13:08.090 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Fib ]] 00:13:08.090 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fib 00:13:08.090 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.090 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.090 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.090 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fib 00:13:08.090 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fib 00:13:08.347 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:08.347 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.JaM 00:13:08.348 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.348 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.348 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.348 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.JaM 00:13:08.348 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.JaM 00:13:08.605 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Dtt ]] 00:13:08.605 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Dtt 00:13:08.605 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.605 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.605 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.605 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Dtt 00:13:08.605 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Dtt 00:13:08.863 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:08.863 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.VHG 00:13:08.863 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.863 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.863 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.863 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.VHG 00:13:08.863 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.VHG 00:13:09.121 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:09.121 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:09.121 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:09.121 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:09.121 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:09.121 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:09.379 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:09.379 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:09.379 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:09.379 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:09.379 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:09.379 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.379 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.379 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.379 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.379 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.379 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.379 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.637 00:13:09.637 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:09.637 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.637 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:09.895 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.895 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.895 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.895 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.895 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.895 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:09.895 { 00:13:09.895 "cntlid": 1, 00:13:09.895 "qid": 0, 00:13:09.895 "state": "enabled", 00:13:09.895 "thread": "nvmf_tgt_poll_group_000", 00:13:09.895 "listen_address": { 00:13:09.895 "trtype": "TCP", 00:13:09.895 "adrfam": "IPv4", 00:13:09.895 "traddr": "10.0.0.2", 00:13:09.895 "trsvcid": "4420" 00:13:09.895 }, 00:13:09.895 "peer_address": { 00:13:09.895 "trtype": "TCP", 00:13:09.895 "adrfam": "IPv4", 00:13:09.895 "traddr": "10.0.0.1", 00:13:09.895 "trsvcid": "37106" 00:13:09.895 }, 00:13:09.895 "auth": { 00:13:09.895 "state": "completed", 00:13:09.895 "digest": "sha256", 00:13:09.895 "dhgroup": "null" 00:13:09.895 } 00:13:09.895 } 00:13:09.895 ]' 00:13:09.895 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:09.895 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:09.895 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:10.153 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:10.153 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:10.153 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.153 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.153 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.411 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret 
DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.784 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:13:12.042 00:13:12.299 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.299 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.299 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:12.557 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.557 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.557 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.557 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.557 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.557 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:12.557 { 00:13:12.557 "cntlid": 3, 00:13:12.557 "qid": 0, 00:13:12.557 "state": "enabled", 00:13:12.557 "thread": "nvmf_tgt_poll_group_000", 00:13:12.557 "listen_address": { 00:13:12.557 "trtype": "TCP", 00:13:12.557 "adrfam": "IPv4", 00:13:12.557 "traddr": "10.0.0.2", 00:13:12.557 "trsvcid": "4420" 00:13:12.557 }, 00:13:12.557 "peer_address": { 00:13:12.557 "trtype": "TCP", 00:13:12.557 "adrfam": "IPv4", 00:13:12.557 "traddr": "10.0.0.1", 00:13:12.557 "trsvcid": "37142" 00:13:12.557 }, 00:13:12.557 "auth": { 00:13:12.557 "state": "completed", 00:13:12.557 "digest": "sha256", 00:13:12.557 "dhgroup": "null" 00:13:12.557 } 00:13:12.557 } 00:13:12.557 ]' 00:13:12.557 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:12.557 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:12.558 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:12.558 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:12.558 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:12.558 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.558 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.558 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.815 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:13:14.187 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.187 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:13:14.187 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:14.187 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.187 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.187 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.187 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.187 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:14.187 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:14.445 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:14.445 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.445 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:14.445 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:14.445 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:14.445 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.445 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.445 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.445 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.445 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.445 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.445 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.703 00:13:14.703 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:14.703 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:14.703 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.961 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.961 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.961 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.961 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.219 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.219 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.219 { 00:13:15.219 "cntlid": 5, 00:13:15.219 "qid": 0, 00:13:15.219 "state": "enabled", 00:13:15.219 "thread": "nvmf_tgt_poll_group_000", 00:13:15.219 "listen_address": { 00:13:15.219 "trtype": "TCP", 00:13:15.219 "adrfam": "IPv4", 00:13:15.219 "traddr": "10.0.0.2", 00:13:15.219 "trsvcid": "4420" 00:13:15.219 }, 00:13:15.219 "peer_address": { 00:13:15.219 "trtype": "TCP", 00:13:15.219 "adrfam": "IPv4", 00:13:15.219 "traddr": "10.0.0.1", 00:13:15.219 "trsvcid": "37162" 00:13:15.219 }, 00:13:15.219 "auth": { 00:13:15.219 "state": "completed", 00:13:15.219 "digest": "sha256", 00:13:15.219 "dhgroup": "null" 00:13:15.219 } 00:13:15.219 } 00:13:15.219 ]' 00:13:15.219 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.219 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:15.219 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.219 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:15.219 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.219 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.219 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.219 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.477 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:13:16.851 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.851 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:16.851 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:16.851 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.851 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.851 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.851 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:16.851 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:17.109 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:13:17.109 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:17.109 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:17.109 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:17.109 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:17.109 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.109 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:13:17.109 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.109 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.109 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.109 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:17.109 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:17.367 00:13:17.367 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.367 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.367 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.624 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.624 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.624 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
00:13:17.624 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.624 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.624 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.624 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.624 { 00:13:17.624 "cntlid": 7, 00:13:17.624 "qid": 0, 00:13:17.624 "state": "enabled", 00:13:17.624 "thread": "nvmf_tgt_poll_group_000", 00:13:17.624 "listen_address": { 00:13:17.624 "trtype": "TCP", 00:13:17.624 "adrfam": "IPv4", 00:13:17.624 "traddr": "10.0.0.2", 00:13:17.624 "trsvcid": "4420" 00:13:17.624 }, 00:13:17.624 "peer_address": { 00:13:17.624 "trtype": "TCP", 00:13:17.624 "adrfam": "IPv4", 00:13:17.624 "traddr": "10.0.0.1", 00:13:17.624 "trsvcid": "37190" 00:13:17.624 }, 00:13:17.624 "auth": { 00:13:17.624 "state": "completed", 00:13:17.624 "digest": "sha256", 00:13:17.624 "dhgroup": "null" 00:13:17.624 } 00:13:17.624 } 00:13:17.624 ]' 00:13:17.880 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.880 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.880 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.880 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:17.880 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.880 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.880 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.880 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.136 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:13:19.509 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.509 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:19.509 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.509 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.509 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.509 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:19.509 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:19.509 19:10:25
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:19.509 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:19.766 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:13:19.767 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:19.767 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:19.767 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:19.767 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:19.767 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.767 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.767 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.767 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.767 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.767 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.767 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.024 00:13:20.024 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:20.024 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:20.024 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.282 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.282 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.282 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.282 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.282 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.282 19:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:20.282 { 00:13:20.282 "cntlid": 9, 00:13:20.282 "qid": 0, 00:13:20.282 "state": "enabled", 00:13:20.282 "thread": "nvmf_tgt_poll_group_000", 00:13:20.282 "listen_address": { 00:13:20.282 "trtype": "TCP", 00:13:20.282 "adrfam": "IPv4", 00:13:20.282 "traddr": "10.0.0.2", 00:13:20.282 "trsvcid": "4420" 00:13:20.282 }, 00:13:20.282 "peer_address": { 00:13:20.282 "trtype": "TCP", 00:13:20.282 "adrfam": "IPv4", 00:13:20.282 "traddr": "10.0.0.1", 00:13:20.282 "trsvcid": "58006" 00:13:20.282 }, 00:13:20.282 "auth": { 00:13:20.282 "state": "completed", 00:13:20.282 "digest": "sha256", 00:13:20.282 "dhgroup": "ffdhe2048" 00:13:20.282 } 00:13:20.282 } 00:13:20.282 ]' 00:13:20.282 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:20.282 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:20.282 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:20.540 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:20.540 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:20.540 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.540 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.540 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.798 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:13:22.171 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.171 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:22.171 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.171 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.171 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.171 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:22.171 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:22.171 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:22.171 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:13:22.171 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:22.171 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:22.171 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:22.171 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:22.171 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.171 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.171 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.171 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.171 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.171 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.171 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.737 00:13:22.737 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:22.737 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:22.737 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.995 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.995 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.995 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.995 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.995 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.995 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:22.995 { 00:13:22.995 "cntlid": 11, 00:13:22.995 "qid": 0, 00:13:22.995 "state": "enabled", 00:13:22.995 "thread": "nvmf_tgt_poll_group_000", 00:13:22.995 "listen_address": { 
00:13:22.995 "trtype": "TCP", 00:13:22.995 "adrfam": "IPv4", 00:13:22.995 "traddr": "10.0.0.2", 00:13:22.995 "trsvcid": "4420" 00:13:22.995 }, 00:13:22.995 "peer_address": { 00:13:22.995 "trtype": "TCP", 00:13:22.995 "adrfam": "IPv4", 00:13:22.995 "traddr": "10.0.0.1", 00:13:22.995 "trsvcid": "58024" 00:13:22.995 }, 00:13:22.995 "auth": { 00:13:22.995 "state": "completed", 00:13:22.995 "digest": "sha256", 00:13:22.995 "dhgroup": "ffdhe2048" 00:13:22.995 } 00:13:22.995 } 00:13:22.995 ]' 00:13:22.995 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:22.995 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:22.995 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:22.995 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:22.995 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:22.995 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.995 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.995 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.253 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:13:24.628 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.628 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:24.628 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.628 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.628 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.628 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:24.628 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:24.628 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:24.887 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:13:24.887 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:24.887 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:24.887 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:24.887 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:24.887 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.887 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.887 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.887 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.887 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.887 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.887 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.145 00:13:25.145 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:25.145 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.145 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:25.403 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.403 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.403 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.403 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.403 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.403 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:25.403 { 00:13:25.403 "cntlid": 13, 00:13:25.403 "qid": 0, 00:13:25.403 "state": "enabled", 00:13:25.403 "thread": "nvmf_tgt_poll_group_000", 00:13:25.403 "listen_address": { 00:13:25.403 "trtype": "TCP", 00:13:25.403 "adrfam": "IPv4", 00:13:25.403 "traddr": "10.0.0.2", 00:13:25.403 "trsvcid": "4420" 00:13:25.403 }, 00:13:25.403 "peer_address": { 00:13:25.403 "trtype": "TCP", 00:13:25.403 "adrfam": "IPv4", 00:13:25.403 "traddr": "10.0.0.1", 00:13:25.403 "trsvcid": "58066" 00:13:25.403 }, 00:13:25.403 "auth": { 00:13:25.403 
"state": "completed", 00:13:25.403 "digest": "sha256", 00:13:25.403 "dhgroup": "ffdhe2048" 00:13:25.403 } 00:13:25.403 } 00:13:25.403 ]' 00:13:25.403 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:25.403 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:25.403 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:25.660 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:25.660 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:25.660 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.660 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.660 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.918 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:13:27.292 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.292 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:27.292 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.292 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.292 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.292 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:27.292 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:27.292 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:27.292 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:13:27.292 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.292 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:27.292 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:27.292 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:13:27.292 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.292 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:13:27.292 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.292 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.292 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.292 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.292 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.870 00:13:27.870 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:27.870 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:27.870 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.127 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.127 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.127 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.127 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.127 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.127 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.127 { 00:13:28.127 "cntlid": 15, 00:13:28.127 "qid": 0, 00:13:28.127 "state": "enabled", 00:13:28.127 "thread": "nvmf_tgt_poll_group_000", 00:13:28.127 "listen_address": { 00:13:28.127 "trtype": "TCP", 00:13:28.127 "adrfam": "IPv4", 00:13:28.127 "traddr": "10.0.0.2", 00:13:28.127 "trsvcid": "4420" 00:13:28.127 }, 00:13:28.127 "peer_address": { 00:13:28.127 "trtype": "TCP", 00:13:28.127 "adrfam": "IPv4", 00:13:28.127 "traddr": "10.0.0.1", 00:13:28.127 "trsvcid": "38104" 00:13:28.127 }, 00:13:28.127 "auth": { 00:13:28.127 "state": "completed", 00:13:28.127 "digest": "sha256", 00:13:28.127 "dhgroup": "ffdhe2048" 00:13:28.127 } 00:13:28.127 } 00:13:28.127 ]' 00:13:28.127 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:28.127 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
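[Annotation] The digest check just above and the dhgroup/state checks that follow are the test's actual pass/fail criterion: it asks the target for the subsystem's queue pairs and verifies that the qpair negotiated exactly the digest and DH group this iteration configured, and that authentication reached the completed state. A minimal sketch of that assertion pattern — the "$rpc" shorthand is the same illustrative convenience as in the earlier sketch, and the exit-on-mismatch behavior is supplied here explicitly rather than by the harness:

    # Fetch the qpair list and assert the negotiated auth parameters.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]] || exit 1
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]] || exit 1
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || exit 1

The odd-looking `[[ sha256 == \s\h\a\2\5\6 ]]` comparisons in the log are just xtrace's escaping of the right-hand pattern; a mismatch would abort the test under the autotest harness's error handling.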
00:13:28.127 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:28.127 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:28.127 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:28.127 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.127 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.127 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.384 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:13:29.779 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.779 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:29.779 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.779 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.779 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.779 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:29.779 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:29.779 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:29.779 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:30.037 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:13:30.037 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:30.037 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:30.037 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:30.037 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:30.037 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.037 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.037 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.037 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.037 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.037 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.037 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.294 00:13:30.294 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:30.294 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.294 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:30.552 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.552 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.552 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.552 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.552 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.552 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:30.552 { 00:13:30.552 "cntlid": 17, 00:13:30.552 "qid": 0, 00:13:30.552 "state": "enabled", 00:13:30.552 "thread": "nvmf_tgt_poll_group_000", 00:13:30.552 "listen_address": { 00:13:30.552 "trtype": "TCP", 00:13:30.552 "adrfam": "IPv4", 00:13:30.552 "traddr": "10.0.0.2", 00:13:30.552 "trsvcid": "4420" 00:13:30.552 }, 00:13:30.552 "peer_address": { 00:13:30.552 "trtype": "TCP", 00:13:30.552 "adrfam": "IPv4", 00:13:30.552 "traddr": "10.0.0.1", 00:13:30.552 "trsvcid": "38314" 00:13:30.552 }, 00:13:30.552 "auth": { 00:13:30.552 "state": "completed", 00:13:30.552 "digest": "sha256", 00:13:30.552 "dhgroup": "ffdhe3072" 00:13:30.552 } 00:13:30.552 } 00:13:30.552 ]' 00:13:30.552 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:30.809 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:30.809 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:30.809 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:30.809 19:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:30.809 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.809 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.809 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.066 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:13:32.436 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.437 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:32.437 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.437 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.437 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.437 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:32.437 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:32.437 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:32.694 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:13:32.694 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:32.694 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:32.694 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:32.694 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:32.694 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.694 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.694 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
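[Annotation] The `for dhgroup in "${dhgroups[@]}"` / `for keyid in "${!keys[@]}"` trace lines (auth.sh@92-94) reveal the overall shape of the test: an outer sweep over DH groups, an inner sweep over key slots, with the host's allowed algorithms re-pinned before every connect_authenticate call. A hedged reconstruction — array contents beyond what the log shows, and the exact loop body, are assumptions:

    # keys[0..3] / ckeys[0..3] are assumed to hold pre-generated DH-HMAC-CHAP
    # key names; ckeys[3] is empty, so slot 3 authenticates in one direction only.
    for dhgroup in "${dhgroups[@]}"; do     # null ffdhe2048 ffdhe3072 ffdhe4096 ...
        for keyid in "${!keys[@]}"; do      # 0 1 2 3
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done

Inside connect_authenticate, the traced `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion builds the optional argument pair, which is why key1/key2 iterations carry `--dhchap-ctrlr-key ckeyN` on both nvmf_subsystem_add_host and bdev_nvme_attach_controller, while key3 iterations omit it.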
00:13:32.694 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.694 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.694 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.694 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.952 00:13:32.952 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:32.952 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:32.952 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.210 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.210 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.210 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.210 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.210 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.210 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:33.210 { 00:13:33.210 "cntlid": 19, 00:13:33.210 "qid": 0, 00:13:33.210 "state": "enabled", 00:13:33.210 "thread": "nvmf_tgt_poll_group_000", 00:13:33.210 "listen_address": { 00:13:33.210 "trtype": "TCP", 00:13:33.210 "adrfam": "IPv4", 00:13:33.210 "traddr": "10.0.0.2", 00:13:33.210 "trsvcid": "4420" 00:13:33.210 }, 00:13:33.210 "peer_address": { 00:13:33.210 "trtype": "TCP", 00:13:33.210 "adrfam": "IPv4", 00:13:33.210 "traddr": "10.0.0.1", 00:13:33.210 "trsvcid": "38336" 00:13:33.210 }, 00:13:33.210 "auth": { 00:13:33.210 "state": "completed", 00:13:33.210 "digest": "sha256", 00:13:33.210 "dhgroup": "ffdhe3072" 00:13:33.210 } 00:13:33.210 } 00:13:33.210 ]' 00:13:33.468 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:33.468 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:33.468 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:33.468 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:33.468 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:33.468 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.468 19:10:39
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.468 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.725 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:13:35.099 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.099 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:35.099 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.099 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.099 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.099 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:35.099 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:35.099 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:35.099 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:13:35.099 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:35.099 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:35.099 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:35.099 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:35.099 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.099 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.099 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.099 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.099 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.099 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.099 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.665 00:13:35.665 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:35.665 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:35.665 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.923 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.923 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.923 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.923 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.923 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.923 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:35.923 { 00:13:35.923 "cntlid": 21, 00:13:35.923 "qid": 0, 00:13:35.923 "state": "enabled", 00:13:35.923 "thread": "nvmf_tgt_poll_group_000", 00:13:35.923 "listen_address": { 00:13:35.923 "trtype": "TCP", 00:13:35.923 "adrfam": "IPv4", 00:13:35.923 "traddr": "10.0.0.2", 00:13:35.923 "trsvcid": "4420" 00:13:35.923 }, 00:13:35.923 "peer_address": { 00:13:35.923 "trtype": "TCP", 00:13:35.923 "adrfam": "IPv4", 00:13:35.923 "traddr": "10.0.0.1", 00:13:35.923 "trsvcid": "38356" 00:13:35.923 }, 00:13:35.923 "auth": { 00:13:35.923 "state": "completed", 00:13:35.923 "digest": "sha256", 00:13:35.923 "dhgroup": "ffdhe3072" 00:13:35.923 } 00:13:35.923 } 00:13:35.923 ]' 00:13:35.923 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:35.923 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:35.923 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:35.923 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:35.923 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:35.923 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.923 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.923 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.489 
19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:13:37.421 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.421 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:37.421 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.421 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.421 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.421 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:37.421 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:37.421 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:37.986 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:13:37.986 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:37.986 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:37.986 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:37.986 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:37.986 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.986 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:13:37.986 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.986 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.986 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.986 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:37.986 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:38.245 00:13:38.245 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:38.245 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:38.245 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.503 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.503 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.503 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.503 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.503 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.503 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:38.503 { 00:13:38.503 "cntlid": 23, 00:13:38.503 "qid": 0, 00:13:38.503 "state": "enabled", 00:13:38.503 "thread": "nvmf_tgt_poll_group_000", 00:13:38.503 "listen_address": { 00:13:38.503 "trtype": "TCP", 00:13:38.503 "adrfam": "IPv4", 00:13:38.503 "traddr": "10.0.0.2", 00:13:38.503 "trsvcid": "4420" 00:13:38.503 }, 00:13:38.503 "peer_address": { 00:13:38.503 "trtype": "TCP", 00:13:38.503 "adrfam": "IPv4", 00:13:38.503 "traddr": "10.0.0.1", 00:13:38.503 "trsvcid": "38390" 00:13:38.503 }, 00:13:38.503 "auth": { 00:13:38.503 "state": "completed", 00:13:38.503 "digest": "sha256", 00:13:38.503 "dhgroup": "ffdhe3072" 00:13:38.503 } 00:13:38.503 } 00:13:38.503 ]' 00:13:38.503 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:38.503 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:38.503 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:38.761 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:38.761 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:38.761 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.761 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.761 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.018 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=:
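[Annotation] The `nvme connect` invocation just above exercises the same key through the Linux kernel initiator instead of SPDK's userspace host. The secrets are passed in the printable DH-HMAC-CHAP representation `DHHC-1:<t>:<base64 blob>:` where, per the NVMe in-band authentication spec as implemented by nvme-cli (stated here from general knowledge, not from this log), `<t>` = 00 marks a cleartext secret and 01/02/03 mark secrets already transformed with SHA-256/384/512, and the base64 blob carries the key material plus a CRC-32 check. That is why the slot-3 iterations consistently show a `DHHC-1:03:...` secret and no `--dhchap-ctrl-secret`. For reference, nvme-cli can produce such secrets (illustrative invocation, not taken from the log):

    # Generate a SHA-512-transformed DH-HMAC-CHAP secret bound to the subsystem NQN.
    nvme gen-dhchap-key --hmac=3 --nqn=nqn.2024-03.io.spdk:cnode0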
00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.391 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.957 00:13:40.957 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:40.957 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:40.957 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.215 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.215 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.215 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.215 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.215 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.215 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.215 { 00:13:41.215 "cntlid": 25, 00:13:41.215 "qid": 0, 00:13:41.215 "state": "enabled", 00:13:41.215 "thread": "nvmf_tgt_poll_group_000", 00:13:41.215 "listen_address": { 00:13:41.215 "trtype": "TCP", 00:13:41.215 "adrfam": "IPv4", 00:13:41.215 "traddr": "10.0.0.2", 00:13:41.215 "trsvcid": "4420" 00:13:41.215 }, 00:13:41.215 "peer_address": { 00:13:41.215 "trtype": "TCP", 00:13:41.215 "adrfam": "IPv4", 00:13:41.215 "traddr": "10.0.0.1", 00:13:41.215 "trsvcid": "36912" 00:13:41.215 }, 00:13:41.215 "auth": { 00:13:41.215 "state": "completed", 00:13:41.215 "digest": "sha256", 00:13:41.215 "dhgroup": "ffdhe4096" 00:13:41.215 } 00:13:41.215 } 00:13:41.215 ]' 00:13:41.215 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.215 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.215 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.215 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:41.215 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.215 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.215 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.215 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.780 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:13:42.713 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
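
The entries above make up one full connect_authenticate() pass from target/auth.sh: the host is pinned to a single digest/dhgroup pair, added to the subsystem with a bidirectional key pair, attached (which runs the DH-HMAC-CHAP exchange), and the negotiated parameters are then read back from the target's qpair listing before detaching. A minimal standalone sketch of that pass follows; it reuses only RPCs and jq filters that appear verbatim in this log, while $rootdir, the socket paths, and the key names key0/ckey0 are placeholders assuming an SPDK checkout with the keys already registered on both sides.

#!/usr/bin/env bash
# Illustrative sketch, not part of the captured log. Assumes:
#  - $rootdir points at an SPDK checkout,
#  - the target app uses the default RPC socket and listens on 10.0.0.2:4420
#    with subsystem nqn.2024-03.io.spdk:cnode0,
#  - the host-side bdev_nvme app serves RPCs at /var/tmp/host.sock,
#  - DH-CHAP keys named key0/ckey0 were registered with both apps beforehand.
rootdir=${rootdir:-/path/to/spdk}
hostrpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }
tgtrpc()  { "$rootdir/scripts/rpc.py" "$@"; }

hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
subnqn=nqn.2024-03.io.spdk:cnode0

# Pin the host to one digest/dhgroup combination, as the test loop does.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Allow the host on the subsystem with host and controller keys, then attach;
# the attach triggers the DH-HMAC-CHAP authentication exchange.
tgtrpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Read back what the target negotiated on the resulting admin qpair.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(tgtrpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

# Tear down before the next iteration.
hostrpc bdev_nvme_detach_controller nvme0

Each pass in the log additionally exercises the kernel initiator by running nvme connect with --dhchap-secret (and, where a controller key exists, --dhchap-ctrl-secret) against the same subsystem, then disconnecting and removing the host, before the loop advances to the next key ID and dhgroup.
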
00:13:42.713 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:42.713 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.713 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.971 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.971 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.971 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:42.971 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:43.227 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:13:43.227 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:43.227 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:43.227 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:43.227 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:43.227 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.227 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.227 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.227 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.227 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.227 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.227 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.484 00:13:43.485 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.485 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.485 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.050 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.050 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.050 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.050 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.050 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.050 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:44.050 { 00:13:44.050 "cntlid": 27, 00:13:44.050 "qid": 0, 00:13:44.050 "state": "enabled", 00:13:44.050 "thread": "nvmf_tgt_poll_group_000", 00:13:44.050 "listen_address": { 00:13:44.050 "trtype": "TCP", 00:13:44.050 "adrfam": "IPv4", 00:13:44.050 "traddr": "10.0.0.2", 00:13:44.050 "trsvcid": "4420" 00:13:44.050 }, 00:13:44.050 "peer_address": { 00:13:44.050 "trtype": "TCP", 00:13:44.050 "adrfam": "IPv4", 00:13:44.050 "traddr": "10.0.0.1", 00:13:44.050 "trsvcid": "36926" 00:13:44.050 }, 00:13:44.050 "auth": { 00:13:44.050 "state": "completed", 00:13:44.050 "digest": "sha256", 00:13:44.050 "dhgroup": "ffdhe4096" 00:13:44.050 } 00:13:44.050 } 00:13:44.050 ]' 00:13:44.050 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:44.050 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:44.050 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:44.050 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:44.050 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:44.050 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.050 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.050 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.308 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:13:45.681 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.681 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:45.681 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.681 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.681 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.681 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:45.681 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:45.681 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:45.939 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:13:45.939 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:45.939 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:45.939 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:45.939 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:45.939 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.939 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.939 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.939 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.939 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.939 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.939 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:46.197 00:13:46.197 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:46.197 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:46.197 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.762 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.762 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.762 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.762 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.762 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.762 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:46.762 { 00:13:46.762 "cntlid": 29, 00:13:46.762 "qid": 0, 00:13:46.762 "state": "enabled", 00:13:46.762 "thread": "nvmf_tgt_poll_group_000", 00:13:46.762 "listen_address": { 00:13:46.762 "trtype": "TCP", 00:13:46.762 "adrfam": "IPv4", 00:13:46.762 "traddr": "10.0.0.2", 00:13:46.762 "trsvcid": "4420" 00:13:46.762 }, 00:13:46.762 "peer_address": { 00:13:46.762 "trtype": "TCP", 00:13:46.762 "adrfam": "IPv4", 00:13:46.762 "traddr": "10.0.0.1", 00:13:46.762 "trsvcid": "36954" 00:13:46.762 }, 00:13:46.762 "auth": { 00:13:46.762 "state": "completed", 00:13:46.762 "digest": "sha256", 00:13:46.762 "dhgroup": "ffdhe4096" 00:13:46.762 } 00:13:46.762 } 00:13:46.762 ]' 00:13:46.762 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:46.762 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:46.762 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:46.762 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:46.762 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:46.762 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.762 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.762 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.020 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.400 19:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:48.400 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:48.964 00:13:48.964 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:48.964 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:48.964 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.221 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.221 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.221 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.221 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.221 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:49.221 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.221 { 00:13:49.221 "cntlid": 31, 00:13:49.221 "qid": 0, 00:13:49.221 "state": "enabled", 00:13:49.221 "thread": "nvmf_tgt_poll_group_000", 00:13:49.221 "listen_address": { 00:13:49.221 "trtype": "TCP", 00:13:49.221 "adrfam": "IPv4", 00:13:49.221 "traddr": "10.0.0.2", 00:13:49.221 "trsvcid": "4420" 00:13:49.221 }, 00:13:49.221 "peer_address": { 00:13:49.221 "trtype": "TCP", 00:13:49.221 "adrfam": "IPv4", 00:13:49.221 "traddr": "10.0.0.1", 00:13:49.221 "trsvcid": "42966" 00:13:49.221 }, 00:13:49.221 "auth": { 00:13:49.221 "state": "completed", 00:13:49.221 "digest": "sha256", 00:13:49.221 "dhgroup": "ffdhe4096" 00:13:49.221 } 00:13:49.221 } 00:13:49.221 ]' 00:13:49.221 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:49.221 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.221 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:49.478 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:49.478 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:49.478 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.478 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.478 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.735 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:13:51.106 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.106 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:51.106 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.106 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.106 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.106 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:51.106 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:51.106 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:51.106 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:51.106 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:13:51.106 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:51.106 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:51.106 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:51.106 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:51.106 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.106 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.106 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.106 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.106 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.106 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.106 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.039 00:13:52.039 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:52.039 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:52.039 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.039 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.039 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.039 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.039 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.297 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.297 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:52.297 { 00:13:52.297 "cntlid": 33, 00:13:52.297 "qid": 0, 00:13:52.297 "state": "enabled", 00:13:52.297 "thread": "nvmf_tgt_poll_group_000", 00:13:52.297 "listen_address": { 
00:13:52.297 "trtype": "TCP", 00:13:52.297 "adrfam": "IPv4", 00:13:52.297 "traddr": "10.0.0.2", 00:13:52.297 "trsvcid": "4420" 00:13:52.297 }, 00:13:52.297 "peer_address": { 00:13:52.297 "trtype": "TCP", 00:13:52.297 "adrfam": "IPv4", 00:13:52.297 "traddr": "10.0.0.1", 00:13:52.297 "trsvcid": "43000" 00:13:52.297 }, 00:13:52.297 "auth": { 00:13:52.297 "state": "completed", 00:13:52.297 "digest": "sha256", 00:13:52.297 "dhgroup": "ffdhe6144" 00:13:52.297 } 00:13:52.297 } 00:13:52.297 ]' 00:13:52.297 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:52.297 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:52.297 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:52.297 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:52.297 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:52.297 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.297 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.297 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.555 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:13:53.928 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.928 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:53.928 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.928 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.928 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.928 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:53.928 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:53.928 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:54.186 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:13:54.186 19:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.186 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:54.186 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:54.186 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:54.186 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.186 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.186 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.186 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.186 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.186 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.186 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.751 00:13:54.751 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:54.751 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.751 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.011 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.011 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.011 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.011 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.011 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.011 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.011 { 00:13:55.011 "cntlid": 35, 00:13:55.011 "qid": 0, 00:13:55.011 "state": "enabled", 00:13:55.011 "thread": "nvmf_tgt_poll_group_000", 00:13:55.011 "listen_address": { 00:13:55.011 "trtype": "TCP", 00:13:55.011 "adrfam": "IPv4", 00:13:55.011 "traddr": "10.0.0.2", 00:13:55.011 "trsvcid": "4420" 00:13:55.011 }, 00:13:55.011 "peer_address": { 00:13:55.011 "trtype": "TCP", 00:13:55.011 "adrfam": "IPv4", 00:13:55.011 "traddr": "10.0.0.1", 00:13:55.011 "trsvcid": "43028" 00:13:55.011 
}, 00:13:55.011 "auth": { 00:13:55.011 "state": "completed", 00:13:55.011 "digest": "sha256", 00:13:55.011 "dhgroup": "ffdhe6144" 00:13:55.011 } 00:13:55.011 } 00:13:55.011 ]' 00:13:55.011 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.011 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.011 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.011 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:55.011 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.310 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.310 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.310 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.593 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:13:56.527 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.527 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:56.527 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.527 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.527 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.527 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.527 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:56.527 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:57.093 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:13:57.093 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:57.093 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:57.093 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:57.093 19:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:57.093 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.093 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.093 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.093 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.093 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.093 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.093 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.659 00:13:57.659 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:57.659 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:57.659 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.917 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.917 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.917 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.917 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.917 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.917 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.917 { 00:13:57.917 "cntlid": 37, 00:13:57.917 "qid": 0, 00:13:57.917 "state": "enabled", 00:13:57.917 "thread": "nvmf_tgt_poll_group_000", 00:13:57.917 "listen_address": { 00:13:57.917 "trtype": "TCP", 00:13:57.917 "adrfam": "IPv4", 00:13:57.917 "traddr": "10.0.0.2", 00:13:57.917 "trsvcid": "4420" 00:13:57.917 }, 00:13:57.917 "peer_address": { 00:13:57.917 "trtype": "TCP", 00:13:57.917 "adrfam": "IPv4", 00:13:57.917 "traddr": "10.0.0.1", 00:13:57.917 "trsvcid": "43060" 00:13:57.917 }, 00:13:57.917 "auth": { 00:13:57.917 "state": "completed", 00:13:57.917 "digest": "sha256", 00:13:57.917 "dhgroup": "ffdhe6144" 00:13:57.917 } 00:13:57.917 } 00:13:57.917 ]' 00:13:57.917 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.917 19:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:57.917 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.917 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:57.917 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.917 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.917 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.917 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.483 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:13:59.417 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.417 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:59.417 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.417 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.417 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.417 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:59.417 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:59.417 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:59.983 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:13:59.983 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:59.983 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:59.983 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:59.983 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:59.983 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.983 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:13:59.983 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.983 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.983 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.983 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:59.983 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:00.548 00:14:00.548 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:00.548 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:00.548 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.805 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.805 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.805 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.805 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.805 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.805 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.805 { 00:14:00.805 "cntlid": 39, 00:14:00.805 "qid": 0, 00:14:00.805 "state": "enabled", 00:14:00.805 "thread": "nvmf_tgt_poll_group_000", 00:14:00.805 "listen_address": { 00:14:00.805 "trtype": "TCP", 00:14:00.805 "adrfam": "IPv4", 00:14:00.805 "traddr": "10.0.0.2", 00:14:00.805 "trsvcid": "4420" 00:14:00.805 }, 00:14:00.805 "peer_address": { 00:14:00.805 "trtype": "TCP", 00:14:00.805 "adrfam": "IPv4", 00:14:00.805 "traddr": "10.0.0.1", 00:14:00.805 "trsvcid": "55676" 00:14:00.805 }, 00:14:00.805 "auth": { 00:14:00.805 "state": "completed", 00:14:00.805 "digest": "sha256", 00:14:00.805 "dhgroup": "ffdhe6144" 00:14:00.805 } 00:14:00.805 } 00:14:00.805 ]' 00:14:00.805 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.805 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:00.805 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.805 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:00.805 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.805 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.805 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.805 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.369 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:14:02.301 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.301 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:02.301 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.301 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.301 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.301 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:02.301 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:02.301 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:02.301 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:02.865 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:02.865 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:02.865 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:02.865 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:02.865 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:02.865 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.865 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.865 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.865 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.865 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.865 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.865 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.797 00:14:03.797 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.797 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.797 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.054 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.054 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.054 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.054 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.054 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.054 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.054 { 00:14:04.054 "cntlid": 41, 00:14:04.054 "qid": 0, 00:14:04.054 "state": "enabled", 00:14:04.054 "thread": "nvmf_tgt_poll_group_000", 00:14:04.054 "listen_address": { 00:14:04.054 "trtype": "TCP", 00:14:04.054 "adrfam": "IPv4", 00:14:04.054 "traddr": "10.0.0.2", 00:14:04.054 "trsvcid": "4420" 00:14:04.054 }, 00:14:04.054 "peer_address": { 00:14:04.054 "trtype": "TCP", 00:14:04.054 "adrfam": "IPv4", 00:14:04.054 "traddr": "10.0.0.1", 00:14:04.054 "trsvcid": "55716" 00:14:04.054 }, 00:14:04.054 "auth": { 00:14:04.054 "state": "completed", 00:14:04.054 "digest": "sha256", 00:14:04.054 "dhgroup": "ffdhe8192" 00:14:04.054 } 00:14:04.054 } 00:14:04.054 ]' 00:14:04.054 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.054 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:04.054 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.054 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:04.054 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.054 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.054 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:04.054 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.620 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:14:05.549 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.549 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:05.549 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.549 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.549 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.549 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:05.549 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:05.549 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:06.114 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:06.114 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.114 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:06.114 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:06.114 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:06.114 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.114 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.114 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.114 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.114 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.114 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.114 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.046 00:14:07.047 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.047 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.047 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.304 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.304 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.304 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.304 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.304 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.304 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.304 { 00:14:07.304 "cntlid": 43, 00:14:07.304 "qid": 0, 00:14:07.304 "state": "enabled", 00:14:07.304 "thread": "nvmf_tgt_poll_group_000", 00:14:07.304 "listen_address": { 00:14:07.304 "trtype": "TCP", 00:14:07.304 "adrfam": "IPv4", 00:14:07.304 "traddr": "10.0.0.2", 00:14:07.304 "trsvcid": "4420" 00:14:07.304 }, 00:14:07.304 "peer_address": { 00:14:07.304 "trtype": "TCP", 00:14:07.304 "adrfam": "IPv4", 00:14:07.304 "traddr": "10.0.0.1", 00:14:07.304 "trsvcid": "55734" 00:14:07.304 }, 00:14:07.304 "auth": { 00:14:07.304 "state": "completed", 00:14:07.304 "digest": "sha256", 00:14:07.304 "dhgroup": "ffdhe8192" 00:14:07.304 } 00:14:07.304 } 00:14:07.304 ]' 00:14:07.304 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.304 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.304 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.304 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:07.304 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.304 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.304 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.304 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.869 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:14:08.802 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.802 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:08.802 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.802 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.802 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.802 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.802 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:08.802 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:09.061 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:09.061 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:09.061 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:09.061 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:09.061 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:09.061 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.061 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.061 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.061 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.061 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.061 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.061 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.434 00:14:10.434 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.434 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.434 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.434 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.434 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.434 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.434 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.434 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.434 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:10.434 { 00:14:10.434 "cntlid": 45, 00:14:10.434 "qid": 0, 00:14:10.434 "state": "enabled", 00:14:10.434 "thread": "nvmf_tgt_poll_group_000", 00:14:10.434 "listen_address": { 00:14:10.434 "trtype": "TCP", 00:14:10.434 "adrfam": "IPv4", 00:14:10.434 "traddr": "10.0.0.2", 00:14:10.434 "trsvcid": "4420" 00:14:10.434 }, 00:14:10.434 "peer_address": { 00:14:10.434 "trtype": "TCP", 00:14:10.434 "adrfam": "IPv4", 00:14:10.434 "traddr": "10.0.0.1", 00:14:10.434 "trsvcid": "55488" 00:14:10.434 }, 00:14:10.434 "auth": { 00:14:10.434 "state": "completed", 00:14:10.434 "digest": "sha256", 00:14:10.434 "dhgroup": "ffdhe8192" 00:14:10.434 } 00:14:10.434 } 00:14:10.434 ]' 00:14:10.434 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:10.692 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:10.692 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:10.692 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:10.692 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:10.692 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.692 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.692 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.949 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret 
DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:14:12.321 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.321 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:12.321 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.321 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.321 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.321 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.321 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:12.321 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:12.579 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:12.579 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.579 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:12.579 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:12.579 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:12.579 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.579 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:14:12.579 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.579 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.579 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.579 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:12.579 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:13.513 00:14:13.513 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.513 19:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.513 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.771 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.771 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.771 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.771 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.771 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.771 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.771 { 00:14:13.771 "cntlid": 47, 00:14:13.771 "qid": 0, 00:14:13.771 "state": "enabled", 00:14:13.771 "thread": "nvmf_tgt_poll_group_000", 00:14:13.771 "listen_address": { 00:14:13.771 "trtype": "TCP", 00:14:13.771 "adrfam": "IPv4", 00:14:13.771 "traddr": "10.0.0.2", 00:14:13.771 "trsvcid": "4420" 00:14:13.771 }, 00:14:13.771 "peer_address": { 00:14:13.771 "trtype": "TCP", 00:14:13.771 "adrfam": "IPv4", 00:14:13.771 "traddr": "10.0.0.1", 00:14:13.771 "trsvcid": "55524" 00:14:13.771 }, 00:14:13.771 "auth": { 00:14:13.771 "state": "completed", 00:14:13.771 "digest": "sha256", 00:14:13.771 "dhgroup": "ffdhe8192" 00:14:13.771 } 00:14:13.771 } 00:14:13.771 ]' 00:14:13.771 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.771 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.771 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.029 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:14.029 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.029 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.029 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.029 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.287 19:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.662 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.228 00:14:16.228 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.228 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.228 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.486 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.486 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.486 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.486 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.486 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.486 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.486 { 00:14:16.486 "cntlid": 49, 00:14:16.486 "qid": 0, 00:14:16.486 "state": "enabled", 00:14:16.486 "thread": "nvmf_tgt_poll_group_000", 00:14:16.486 "listen_address": { 00:14:16.486 "trtype": "TCP", 00:14:16.486 "adrfam": "IPv4", 00:14:16.486 "traddr": "10.0.0.2", 00:14:16.486 "trsvcid": "4420" 00:14:16.486 }, 00:14:16.486 "peer_address": { 00:14:16.486 "trtype": "TCP", 00:14:16.486 "adrfam": "IPv4", 00:14:16.486 "traddr": "10.0.0.1", 00:14:16.486 "trsvcid": "55546" 00:14:16.486 }, 00:14:16.486 "auth": { 00:14:16.486 "state": "completed", 00:14:16.486 "digest": "sha384", 00:14:16.486 "dhgroup": "null" 00:14:16.486 } 00:14:16.486 } 00:14:16.486 ]' 00:14:16.486 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.486 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:16.486 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.486 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:16.486 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.486 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.486 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.486 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.745 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:14:18.118 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.118 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:18.118 19:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.118 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.118 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.118 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:18.118 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:18.118 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:18.376 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:18.376 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:18.376 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:18.376 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:18.376 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:18.376 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.376 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.376 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.376 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.376 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.376 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.376 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.634 00:14:18.634 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.634 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:18.634 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.892 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.892 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.892 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.892 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.892 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.892 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.892 { 00:14:18.892 "cntlid": 51, 00:14:18.892 "qid": 0, 00:14:18.892 "state": "enabled", 00:14:18.892 "thread": "nvmf_tgt_poll_group_000", 00:14:18.892 "listen_address": { 00:14:18.892 "trtype": "TCP", 00:14:18.892 "adrfam": "IPv4", 00:14:18.892 "traddr": "10.0.0.2", 00:14:18.892 "trsvcid": "4420" 00:14:18.892 }, 00:14:18.892 "peer_address": { 00:14:18.892 "trtype": "TCP", 00:14:18.892 "adrfam": "IPv4", 00:14:18.892 "traddr": "10.0.0.1", 00:14:18.892 "trsvcid": "55572" 00:14:18.892 }, 00:14:18.892 "auth": { 00:14:18.892 "state": "completed", 00:14:18.892 "digest": "sha384", 00:14:18.892 "dhgroup": "null" 00:14:18.892 } 00:14:18.892 } 00:14:18.892 ]' 00:14:18.892 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.151 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.151 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.151 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:19.151 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:19.151 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.151 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.151 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.408 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.878 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.879 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.137 00:14:21.137 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:21.137 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:21.137 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.704 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.704 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.704 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.704 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.704 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:21.704 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:21.704 { 00:14:21.704 "cntlid": 53, 00:14:21.704 "qid": 0, 00:14:21.704 "state": "enabled", 00:14:21.704 "thread": "nvmf_tgt_poll_group_000", 00:14:21.704 "listen_address": { 00:14:21.704 "trtype": "TCP", 00:14:21.704 "adrfam": "IPv4", 00:14:21.704 "traddr": "10.0.0.2", 00:14:21.704 "trsvcid": "4420" 00:14:21.704 }, 00:14:21.704 "peer_address": { 00:14:21.704 "trtype": "TCP", 00:14:21.704 "adrfam": "IPv4", 00:14:21.704 "traddr": "10.0.0.1", 00:14:21.704 "trsvcid": "50638" 00:14:21.704 }, 00:14:21.704 "auth": { 00:14:21.704 "state": "completed", 00:14:21.704 "digest": "sha384", 00:14:21.704 "dhgroup": "null" 00:14:21.704 } 00:14:21.704 } 00:14:21.704 ]' 00:14:21.704 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.704 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:21.704 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:21.704 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:21.704 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:21.704 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.704 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.704 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.962 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:14:23.335 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.335 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:23.335 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.335 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.335 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.335 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.335 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:23.335 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:23.593 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:23.593 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.593 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:23.593 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:23.593 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:23.593 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.593 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:14:23.593 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.593 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.593 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.593 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:23.593 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:23.851 00:14:23.851 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.851 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:23.851 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.110 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.110 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.110 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.110 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.110 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.110 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.110 { 00:14:24.110 "cntlid": 55, 00:14:24.110 "qid": 0, 00:14:24.110 "state": "enabled", 00:14:24.110 "thread": "nvmf_tgt_poll_group_000", 00:14:24.110 "listen_address": { 00:14:24.110 "trtype": "TCP", 00:14:24.110 "adrfam": "IPv4", 00:14:24.110 "traddr": "10.0.0.2", 00:14:24.110 "trsvcid": "4420" 00:14:24.110 }, 00:14:24.110 "peer_address": { 
00:14:24.110 "trtype": "TCP", 00:14:24.110 "adrfam": "IPv4", 00:14:24.110 "traddr": "10.0.0.1", 00:14:24.110 "trsvcid": "50670" 00:14:24.110 }, 00:14:24.110 "auth": { 00:14:24.110 "state": "completed", 00:14:24.110 "digest": "sha384", 00:14:24.110 "dhgroup": "null" 00:14:24.110 } 00:14:24.110 } 00:14:24.110 ]' 00:14:24.110 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.368 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:24.368 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.368 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:24.368 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.368 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.368 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.368 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.626 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:14:25.999 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.999 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:25.999 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.999 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.999 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.999 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:25.999 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.999 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:25.999 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:25.999 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:25.999 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.000 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:14:26.000 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:26.000 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:26.000 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.000 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.000 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.000 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.000 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.000 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.000 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.566 00:14:26.566 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.566 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.566 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.824 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.824 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.824 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.824 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.824 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.824 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.824 { 00:14:26.824 "cntlid": 57, 00:14:26.824 "qid": 0, 00:14:26.824 "state": "enabled", 00:14:26.824 "thread": "nvmf_tgt_poll_group_000", 00:14:26.824 "listen_address": { 00:14:26.824 "trtype": "TCP", 00:14:26.824 "adrfam": "IPv4", 00:14:26.824 "traddr": "10.0.0.2", 00:14:26.824 "trsvcid": "4420" 00:14:26.824 }, 00:14:26.824 "peer_address": { 00:14:26.824 "trtype": "TCP", 00:14:26.824 "adrfam": "IPv4", 00:14:26.824 "traddr": "10.0.0.1", 00:14:26.824 "trsvcid": "50688" 00:14:26.824 }, 00:14:26.824 "auth": { 00:14:26.824 "state": "completed", 00:14:26.824 "digest": "sha384", 00:14:26.824 "dhgroup": "ffdhe2048" 00:14:26.824 } 00:14:26.824 } 00:14:26.824 ]' 
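At this point in the trace connect_authenticate verifies what was actually negotiated: it pulls the qpair list for the subsystem and string-compares the auth.digest, auth.dhgroup and auth.state fields against the expected values (the auth.sh@46-48 checks that follow). A minimal standalone sketch of that verification, assuming the SPDK repo path from this trace and the target's default RPC socket (the script itself goes through its rpc_cmd wrapper); the helper name verify_qpair_auth is hypothetical:

# Minimal sketch of the verification step. Assumes the rpc.py path from this
# trace and the target's default RPC socket; verify_qpair_auth is a
# hypothetical helper, not part of auth.sh.
verify_qpair_auth() {
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    local qpairs
    # Fetch the active qpairs for the subsystem under test.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # Compare negotiated digest, DH group and final auth state.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$1" ]] &&
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$2" ]] &&
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
}
verify_qpair_auth sha384 ffdhe2048   # mirrors the jq checks below
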
00:14:26.824 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.824 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:26.824 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.824 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.824 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.824 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.824 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.824 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.082 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:14:28.454 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.454 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:28.454 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.454 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.454 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.454 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.454 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:28.454 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:28.712 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:28.712 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.712 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:28.712 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:28.712 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:28.712 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.712 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.712 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.712 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.712 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.712 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.712 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.971 00:14:28.971 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.971 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.971 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.536 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.536 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.536 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.536 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.536 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.536 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.536 { 00:14:29.536 "cntlid": 59, 00:14:29.536 "qid": 0, 00:14:29.536 "state": "enabled", 00:14:29.536 "thread": "nvmf_tgt_poll_group_000", 00:14:29.536 "listen_address": { 00:14:29.536 "trtype": "TCP", 00:14:29.536 "adrfam": "IPv4", 00:14:29.536 "traddr": "10.0.0.2", 00:14:29.536 "trsvcid": "4420" 00:14:29.536 }, 00:14:29.536 "peer_address": { 00:14:29.536 "trtype": "TCP", 00:14:29.536 "adrfam": "IPv4", 00:14:29.536 "traddr": "10.0.0.1", 00:14:29.536 "trsvcid": "34670" 00:14:29.536 }, 00:14:29.536 "auth": { 00:14:29.536 "state": "completed", 00:14:29.536 "digest": "sha384", 00:14:29.536 "dhgroup": "ffdhe2048" 00:14:29.536 } 00:14:29.536 } 00:14:29.536 ]' 00:14:29.536 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.536 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:29.536 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.536 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:29.536 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.536 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.536 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.536 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.794 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:14:31.167 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.167 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:31.167 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.167 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.167 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.167 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.167 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:31.168 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:31.168 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:14:31.168 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.168 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:31.168 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:31.168 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:31.168 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.168 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.168 
19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.168 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.426 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.426 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.426 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.684 00:14:31.684 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:31.684 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.684 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.942 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.942 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.942 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.942 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.942 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.942 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.942 { 00:14:31.942 "cntlid": 61, 00:14:31.942 "qid": 0, 00:14:31.943 "state": "enabled", 00:14:31.943 "thread": "nvmf_tgt_poll_group_000", 00:14:31.943 "listen_address": { 00:14:31.943 "trtype": "TCP", 00:14:31.943 "adrfam": "IPv4", 00:14:31.943 "traddr": "10.0.0.2", 00:14:31.943 "trsvcid": "4420" 00:14:31.943 }, 00:14:31.943 "peer_address": { 00:14:31.943 "trtype": "TCP", 00:14:31.943 "adrfam": "IPv4", 00:14:31.943 "traddr": "10.0.0.1", 00:14:31.943 "trsvcid": "34696" 00:14:31.943 }, 00:14:31.943 "auth": { 00:14:31.943 "state": "completed", 00:14:31.943 "digest": "sha384", 00:14:31.943 "dhgroup": "ffdhe2048" 00:14:31.943 } 00:14:31.943 } 00:14:31.943 ]' 00:14:31.943 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.943 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:31.943 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.200 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:32.200 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.200 19:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.200 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.200 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.458 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.832 
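[annotation] The key3 rounds differ from the others: the expansion at target/auth.sh@37 emits the controller-key argument only when one exists, and ckeys[3] is empty, so the add_host call above carries --dhchap-key key3 alone and the round exercises unidirectional (host-only) authentication. As traced, with $subnqn/$hostnqn standing in for the literal NQNs:

# the array is empty when ckeys[$3] is unset/empty, so "${ckey[@]}"
# contributes no --dhchap-ctrlr-key argument for keyid 3
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"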
19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:33.832 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:34.398 00:14:34.398 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.398 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.398 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.398 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.398 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.398 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.398 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.398 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.398 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.398 { 00:14:34.398 "cntlid": 63, 00:14:34.398 "qid": 0, 00:14:34.398 "state": "enabled", 00:14:34.398 "thread": "nvmf_tgt_poll_group_000", 00:14:34.398 "listen_address": { 00:14:34.398 "trtype": "TCP", 00:14:34.398 "adrfam": "IPv4", 00:14:34.398 "traddr": "10.0.0.2", 00:14:34.398 "trsvcid": "4420" 00:14:34.398 }, 00:14:34.398 "peer_address": { 00:14:34.398 "trtype": "TCP", 00:14:34.398 "adrfam": "IPv4", 00:14:34.398 "traddr": "10.0.0.1", 00:14:34.398 "trsvcid": "34728" 00:14:34.398 }, 00:14:34.398 "auth": { 00:14:34.398 "state": "completed", 00:14:34.398 "digest": "sha384", 00:14:34.398 "dhgroup": "ffdhe2048" 00:14:34.398 } 00:14:34.398 } 00:14:34.398 ]' 00:14:34.398 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.655 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:34.655 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.655 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:34.655 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.655 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.655 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.655 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:34.912 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:14:36.283 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.283 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:36.283 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.283 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.283 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.283 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:36.283 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.283 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:36.283 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:36.283 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:14:36.283 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.283 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:36.283 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:36.283 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:36.283 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.284 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.284 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.284 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.284 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.284 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.284 19:11:42 
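[annotation] The host-side half runs in a second SPDK application serving RPCs on /var/tmp/host.sock; the hostrpc wrapper (target/auth.sh@31) is plain rpc.py pointed at that socket. The attach traced above, written out with the path abbreviated:

# authenticate to the target as the host using key0, and verify the controller with ckey0
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0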
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.848 00:14:36.848 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.848 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.848 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.106 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.106 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.106 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.106 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.106 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.106 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.106 { 00:14:37.106 "cntlid": 65, 00:14:37.106 "qid": 0, 00:14:37.106 "state": "enabled", 00:14:37.106 "thread": "nvmf_tgt_poll_group_000", 00:14:37.106 "listen_address": { 00:14:37.106 "trtype": "TCP", 00:14:37.106 "adrfam": "IPv4", 00:14:37.106 "traddr": "10.0.0.2", 00:14:37.106 "trsvcid": "4420" 00:14:37.106 }, 00:14:37.106 "peer_address": { 00:14:37.106 "trtype": "TCP", 00:14:37.106 "adrfam": "IPv4", 00:14:37.106 "traddr": "10.0.0.1", 00:14:37.106 "trsvcid": "34744" 00:14:37.106 }, 00:14:37.106 "auth": { 00:14:37.106 "state": "completed", 00:14:37.106 "digest": "sha384", 00:14:37.106 "dhgroup": "ffdhe3072" 00:14:37.106 } 00:14:37.106 } 00:14:37.106 ]' 00:14:37.106 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.106 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:37.106 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.106 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:37.106 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.106 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.106 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.106 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.670 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid 
a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:14:38.602 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.867 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:38.867 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.867 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.867 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.867 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.867 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:38.867 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:39.128 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:14:39.128 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.128 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:39.128 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:39.128 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:39.128 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.128 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.128 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.128 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.128 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.128 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.128 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.385 00:14:39.385 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.385 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.385 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.643 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.643 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.643 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.643 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.643 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.643 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.643 { 00:14:39.643 "cntlid": 67, 00:14:39.643 "qid": 0, 00:14:39.643 "state": "enabled", 00:14:39.643 "thread": "nvmf_tgt_poll_group_000", 00:14:39.643 "listen_address": { 00:14:39.643 "trtype": "TCP", 00:14:39.643 "adrfam": "IPv4", 00:14:39.643 "traddr": "10.0.0.2", 00:14:39.643 "trsvcid": "4420" 00:14:39.643 }, 00:14:39.643 "peer_address": { 00:14:39.643 "trtype": "TCP", 00:14:39.643 "adrfam": "IPv4", 00:14:39.643 "traddr": "10.0.0.1", 00:14:39.643 "trsvcid": "36218" 00:14:39.643 }, 00:14:39.643 "auth": { 00:14:39.643 "state": "completed", 00:14:39.643 "digest": "sha384", 00:14:39.643 "dhgroup": "ffdhe3072" 00:14:39.643 } 00:14:39.643 } 00:14:39.643 ]' 00:14:39.643 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.900 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:39.900 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.900 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:39.900 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.900 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.900 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.900 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.157 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:14:41.529 19:11:47 
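[annotation] The secrets handed to nvme connect use the NVMe-oF in-band authentication representation DHHC-1:xx:<base64>:. Reading it per TP 8006 / nvme-cli conventions (this interpretation is ours, not stated by the log): the xx field encodes how the secret was transformed (00 = unhashed, 01/02/03 = SHA-256/384/512), and the base64 payload is the raw secret followed by a CRC-32 of it. A quick length check on the key0 secret seen in this trace:

key='DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==:'
b64=${key#DHHC-1:*:}; b64=${b64%:}        # strip the "DHHC-1:00:" prefix and trailing ":"
printf '%s' "$b64" | base64 -d | wc -c    # 52 bytes: 48-byte secret + 4-byte CRC-32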
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.529 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:41.529 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.529 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.529 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.529 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.529 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:41.529 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:41.786 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:14:41.786 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.786 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:41.786 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:41.786 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:41.786 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.786 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.786 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.786 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.786 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.786 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.786 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.043 00:14:42.043 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.043 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.043 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.301 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.301 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.301 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.301 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.301 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.301 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.301 { 00:14:42.301 "cntlid": 69, 00:14:42.301 "qid": 0, 00:14:42.301 "state": "enabled", 00:14:42.301 "thread": "nvmf_tgt_poll_group_000", 00:14:42.301 "listen_address": { 00:14:42.301 "trtype": "TCP", 00:14:42.301 "adrfam": "IPv4", 00:14:42.301 "traddr": "10.0.0.2", 00:14:42.301 "trsvcid": "4420" 00:14:42.301 }, 00:14:42.301 "peer_address": { 00:14:42.301 "trtype": "TCP", 00:14:42.301 "adrfam": "IPv4", 00:14:42.301 "traddr": "10.0.0.1", 00:14:42.301 "trsvcid": "36256" 00:14:42.301 }, 00:14:42.301 "auth": { 00:14:42.301 "state": "completed", 00:14:42.301 "digest": "sha384", 00:14:42.301 "dhgroup": "ffdhe3072" 00:14:42.301 } 00:14:42.301 } 00:14:42.301 ]' 00:14:42.301 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.559 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:42.559 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.559 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:42.559 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.559 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.559 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.559 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.816 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:14:44.219 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.219 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:44.219 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.219 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.219 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.219 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.219 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:44.219 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:44.219 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:14:44.219 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.219 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:44.219 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:44.219 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:44.219 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.219 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:14:44.219 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.219 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.219 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.219 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:44.219 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:44.800 00:14:44.800 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.800 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.800 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.058 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.058 19:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.058 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.058 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.058 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.058 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.058 { 00:14:45.058 "cntlid": 71, 00:14:45.058 "qid": 0, 00:14:45.058 "state": "enabled", 00:14:45.058 "thread": "nvmf_tgt_poll_group_000", 00:14:45.058 "listen_address": { 00:14:45.058 "trtype": "TCP", 00:14:45.058 "adrfam": "IPv4", 00:14:45.058 "traddr": "10.0.0.2", 00:14:45.058 "trsvcid": "4420" 00:14:45.058 }, 00:14:45.058 "peer_address": { 00:14:45.058 "trtype": "TCP", 00:14:45.058 "adrfam": "IPv4", 00:14:45.058 "traddr": "10.0.0.1", 00:14:45.058 "trsvcid": "36272" 00:14:45.058 }, 00:14:45.058 "auth": { 00:14:45.058 "state": "completed", 00:14:45.058 "digest": "sha384", 00:14:45.058 "dhgroup": "ffdhe3072" 00:14:45.058 } 00:14:45.058 } 00:14:45.058 ]' 00:14:45.058 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.058 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:45.058 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.058 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:45.058 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.058 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.058 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.058 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.624 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:14:46.557 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.558 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:46.558 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.558 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.558 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.558 19:11:52 
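[annotation] Alongside the SPDK-host attach, every round also proves the kernel-initiator path (target/auth.sh@52/@55): nvme connect carries the same key material as DHHC-1 secrets, and the unidirectional key3 rounds, as just traced, pass --dhchap-secret alone. Sketched with the long secrets elided:

# bidirectional rounds also pass --dhchap-ctrl-secret; key3 rounds omit it
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc \
    --hostid a27f578f-8275-e111-bd1d-001e673e77fc \
    --dhchap-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: "disconnected 1 controller(s)"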
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:46.558 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.558 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:46.558 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:47.124 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:14:47.124 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.124 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:47.124 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:47.124 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:47.124 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.124 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.124 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.124 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.124 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.124 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.124 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.382 00:14:47.382 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.382 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.382 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.640 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.640 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.640 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.640 19:11:53 
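[annotation] The loop markers traced here (target/auth.sh@92-@94) show how the sweep is driven: for every DH group, and for every key index, the host's acceptable digest/dhgroup set is narrowed with bdev_nvme_set_options before connect_authenticate re-runs the full round. Reconstructed from the trace, helper names as they appear there:

for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048, ffdhe3072, ffdhe4096, ... in this sha384 pass
    for keyid in "${!keys[@]}"; do     # 0 1 2 3
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha384 "$dhgroup" "$keyid"
    done
done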
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.640 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.640 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.640 { 00:14:47.640 "cntlid": 73, 00:14:47.640 "qid": 0, 00:14:47.640 "state": "enabled", 00:14:47.640 "thread": "nvmf_tgt_poll_group_000", 00:14:47.640 "listen_address": { 00:14:47.640 "trtype": "TCP", 00:14:47.640 "adrfam": "IPv4", 00:14:47.640 "traddr": "10.0.0.2", 00:14:47.640 "trsvcid": "4420" 00:14:47.640 }, 00:14:47.640 "peer_address": { 00:14:47.640 "trtype": "TCP", 00:14:47.640 "adrfam": "IPv4", 00:14:47.640 "traddr": "10.0.0.1", 00:14:47.640 "trsvcid": "36304" 00:14:47.640 }, 00:14:47.640 "auth": { 00:14:47.640 "state": "completed", 00:14:47.640 "digest": "sha384", 00:14:47.640 "dhgroup": "ffdhe4096" 00:14:47.640 } 00:14:47.640 } 00:14:47.640 ]' 00:14:47.640 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.898 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:47.898 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.898 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:47.898 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.898 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.898 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.898 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.155 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.529 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.095 00:14:50.095 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.095 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.095 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.353 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.353 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.353 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.353 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.353 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.353 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:14:50.353 { 00:14:50.353 "cntlid": 75, 00:14:50.353 "qid": 0, 00:14:50.353 "state": "enabled", 00:14:50.353 "thread": "nvmf_tgt_poll_group_000", 00:14:50.353 "listen_address": { 00:14:50.353 "trtype": "TCP", 00:14:50.353 "adrfam": "IPv4", 00:14:50.353 "traddr": "10.0.0.2", 00:14:50.353 "trsvcid": "4420" 00:14:50.353 }, 00:14:50.353 "peer_address": { 00:14:50.353 "trtype": "TCP", 00:14:50.353 "adrfam": "IPv4", 00:14:50.353 "traddr": "10.0.0.1", 00:14:50.353 "trsvcid": "49588" 00:14:50.353 }, 00:14:50.353 "auth": { 00:14:50.353 "state": "completed", 00:14:50.353 "digest": "sha384", 00:14:50.353 "dhgroup": "ffdhe4096" 00:14:50.353 } 00:14:50.353 } 00:14:50.353 ]' 00:14:50.353 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.353 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:50.353 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.611 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:50.611 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.611 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.611 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.611 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.869 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:14:52.242 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.242 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:52.242 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.242 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.242 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.243 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.243 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:52.243 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:52.243 
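[annotation] Each round's verdict comes from the qpair dump above: nvmf_subsystem_get_qpairs is queried on the target, and the jq assertions at target/auth.sh@46-@48 require that the negotiated digest, DH group, and final auth state match what was configured. Spelled out for this ffdhe4096 round (path abbreviated):

qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished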
19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:14:52.243 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.243 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:52.243 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:52.243 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:52.243 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.243 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.243 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.243 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.243 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.243 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.243 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.809 00:14:52.809 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.809 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.809 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.067 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.067 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.067 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.067 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.067 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.067 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.067 { 00:14:53.067 "cntlid": 77, 00:14:53.067 "qid": 0, 00:14:53.067 "state": "enabled", 00:14:53.067 "thread": "nvmf_tgt_poll_group_000", 00:14:53.067 "listen_address": { 00:14:53.067 "trtype": "TCP", 00:14:53.067 "adrfam": "IPv4", 00:14:53.067 "traddr": "10.0.0.2", 00:14:53.067 "trsvcid": "4420" 00:14:53.067 }, 00:14:53.067 "peer_address": { 
00:14:53.067 "trtype": "TCP", 00:14:53.067 "adrfam": "IPv4", 00:14:53.067 "traddr": "10.0.0.1", 00:14:53.067 "trsvcid": "49620" 00:14:53.067 }, 00:14:53.067 "auth": { 00:14:53.067 "state": "completed", 00:14:53.067 "digest": "sha384", 00:14:53.067 "dhgroup": "ffdhe4096" 00:14:53.067 } 00:14:53.067 } 00:14:53.067 ]' 00:14:53.067 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.067 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:53.067 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.067 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:53.067 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.067 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.067 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.067 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.633 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:14:54.567 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.567 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:54.567 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.567 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.567 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.567 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.567 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:54.567 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:55.132 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:14:55.132 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.132 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:14:55.132 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:55.132 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:55.132 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.132 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:14:55.132 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.132 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.132 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.132 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.132 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.390 00:14:55.390 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.390 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.390 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.647 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.647 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.647 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.647 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.647 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.647 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.647 { 00:14:55.647 "cntlid": 79, 00:14:55.647 "qid": 0, 00:14:55.647 "state": "enabled", 00:14:55.647 "thread": "nvmf_tgt_poll_group_000", 00:14:55.647 "listen_address": { 00:14:55.647 "trtype": "TCP", 00:14:55.647 "adrfam": "IPv4", 00:14:55.647 "traddr": "10.0.0.2", 00:14:55.647 "trsvcid": "4420" 00:14:55.647 }, 00:14:55.647 "peer_address": { 00:14:55.647 "trtype": "TCP", 00:14:55.647 "adrfam": "IPv4", 00:14:55.647 "traddr": "10.0.0.1", 00:14:55.647 "trsvcid": "49666" 00:14:55.647 }, 00:14:55.647 "auth": { 00:14:55.647 "state": "completed", 00:14:55.647 "digest": "sha384", 00:14:55.647 "dhgroup": "ffdhe4096" 00:14:55.647 } 00:14:55.647 } 00:14:55.647 ]' 00:14:55.647 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:14:55.904 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.904 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.904 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:55.904 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.904 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.904 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.904 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.161 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
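[Interleaved with the RPC path, each cycle above also authenticates through the kernel initiator: nvme-cli is handed the same material in DHHC-1 wire format (base64 payload, trailing colon included). The host-side step in isolation, reusing the key-3 secret quoted verbatim in the log (a test fixture generated for this run, not a real credential):

  # Connect with nvme-cli, authenticating with the DHHC-1 secret from the log.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc \
      --hostid a27f578f-8275-e111-bd1d-001e673e77fc \
      --dhchap-secret 'DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=:'
  # key3 carries no controller key in this run, so --dhchap-ctrl-secret is
  # omitted here; the bidirectional cycles above pass it as a second secret.

  # Disconnect once the admin queue has authenticated.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
]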
00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.533 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.465 00:14:58.465 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.465 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.465 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.723 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.723 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.723 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.723 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.723 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.723 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.723 { 00:14:58.723 "cntlid": 81, 00:14:58.723 "qid": 0, 00:14:58.723 "state": "enabled", 00:14:58.723 "thread": "nvmf_tgt_poll_group_000", 00:14:58.723 "listen_address": { 00:14:58.723 "trtype": "TCP", 00:14:58.723 "adrfam": "IPv4", 00:14:58.723 "traddr": "10.0.0.2", 00:14:58.723 "trsvcid": "4420" 00:14:58.723 }, 00:14:58.723 "peer_address": { 00:14:58.723 "trtype": "TCP", 00:14:58.723 "adrfam": "IPv4", 00:14:58.723 "traddr": "10.0.0.1", 00:14:58.723 "trsvcid": "49702" 00:14:58.723 }, 00:14:58.723 "auth": { 00:14:58.723 "state": "completed", 00:14:58.723 "digest": "sha384", 00:14:58.723 "dhgroup": "ffdhe6144" 00:14:58.723 } 00:14:58.723 } 00:14:58.723 ]' 00:14:58.723 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.723 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.723 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.723 19:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:58.723 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.723 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.723 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.723 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.981 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:15:00.354 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.355 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:00.355 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.355 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.355 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.355 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.355 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:00.355 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:00.613 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:00.613 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.613 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:00.613 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:00.613 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:00.613 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.613 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.613 19:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.613 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.613 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.613 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.613 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.179 00:15:01.179 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.179 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.179 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.438 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.438 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.438 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.438 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.438 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.438 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.438 { 00:15:01.438 "cntlid": 83, 00:15:01.438 "qid": 0, 00:15:01.438 "state": "enabled", 00:15:01.438 "thread": "nvmf_tgt_poll_group_000", 00:15:01.438 "listen_address": { 00:15:01.438 "trtype": "TCP", 00:15:01.438 "adrfam": "IPv4", 00:15:01.438 "traddr": "10.0.0.2", 00:15:01.438 "trsvcid": "4420" 00:15:01.438 }, 00:15:01.438 "peer_address": { 00:15:01.438 "trtype": "TCP", 00:15:01.438 "adrfam": "IPv4", 00:15:01.438 "traddr": "10.0.0.1", 00:15:01.438 "trsvcid": "54784" 00:15:01.438 }, 00:15:01.438 "auth": { 00:15:01.438 "state": "completed", 00:15:01.438 "digest": "sha384", 00:15:01.438 "dhgroup": "ffdhe6144" 00:15:01.438 } 00:15:01.438 } 00:15:01.438 ]' 00:15:01.438 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.438 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.438 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.696 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:01.696 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.696 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.696 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.696 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.955 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:15:03.329 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.329 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:03.329 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.329 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.329 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.329 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.330 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:03.330 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:03.330 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:03.330 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.330 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:03.330 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:03.330 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:03.330 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.330 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.330 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.330 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.330 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.330 19:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.330 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.264 00:15:04.264 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.264 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.264 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.264 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.264 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.264 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.264 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.264 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.264 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.264 { 00:15:04.264 "cntlid": 85, 00:15:04.264 "qid": 0, 00:15:04.264 "state": "enabled", 00:15:04.264 "thread": "nvmf_tgt_poll_group_000", 00:15:04.264 "listen_address": { 00:15:04.264 "trtype": "TCP", 00:15:04.264 "adrfam": "IPv4", 00:15:04.264 "traddr": "10.0.0.2", 00:15:04.264 "trsvcid": "4420" 00:15:04.264 }, 00:15:04.264 "peer_address": { 00:15:04.264 "trtype": "TCP", 00:15:04.264 "adrfam": "IPv4", 00:15:04.264 "traddr": "10.0.0.1", 00:15:04.264 "trsvcid": "54824" 00:15:04.264 }, 00:15:04.264 "auth": { 00:15:04.264 "state": "completed", 00:15:04.264 "digest": "sha384", 00:15:04.264 "dhgroup": "ffdhe6144" 00:15:04.264 } 00:15:04.264 } 00:15:04.264 ]' 00:15:04.264 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.522 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.522 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.522 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:04.522 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.522 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.522 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.522 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.780 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:15:06.156 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.156 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:06.156 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.156 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.156 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.156 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.156 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:06.156 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:06.414 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:06.414 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.414 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:06.414 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:06.414 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:06.414 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.414 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:06.415 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.415 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.415 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.415 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.415 19:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.983 00:15:06.983 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.983 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.983 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.242 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.242 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.242 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.242 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.242 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.242 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.242 { 00:15:07.242 "cntlid": 87, 00:15:07.242 "qid": 0, 00:15:07.242 "state": "enabled", 00:15:07.242 "thread": "nvmf_tgt_poll_group_000", 00:15:07.242 "listen_address": { 00:15:07.242 "trtype": "TCP", 00:15:07.242 "adrfam": "IPv4", 00:15:07.242 "traddr": "10.0.0.2", 00:15:07.242 "trsvcid": "4420" 00:15:07.242 }, 00:15:07.242 "peer_address": { 00:15:07.242 "trtype": "TCP", 00:15:07.242 "adrfam": "IPv4", 00:15:07.242 "traddr": "10.0.0.1", 00:15:07.242 "trsvcid": "54860" 00:15:07.242 }, 00:15:07.242 "auth": { 00:15:07.242 "state": "completed", 00:15:07.242 "digest": "sha384", 00:15:07.242 "dhgroup": "ffdhe6144" 00:15:07.242 } 00:15:07.242 } 00:15:07.242 ]' 00:15:07.242 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.242 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.242 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.242 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:07.242 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.500 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.500 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.500 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.784 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc 
--dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:15:08.722 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.722 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:08.722 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.722 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.980 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.980 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.980 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.980 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:08.980 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:09.238 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:09.238 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.238 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.238 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:09.238 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:09.238 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.238 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.238 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.238 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.238 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.238 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.238 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.171 00:15:10.171 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.171 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.171 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.428 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.428 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.428 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.428 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.428 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.428 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.428 { 00:15:10.428 "cntlid": 89, 00:15:10.428 "qid": 0, 00:15:10.428 "state": "enabled", 00:15:10.428 "thread": "nvmf_tgt_poll_group_000", 00:15:10.428 "listen_address": { 00:15:10.428 "trtype": "TCP", 00:15:10.428 "adrfam": "IPv4", 00:15:10.428 "traddr": "10.0.0.2", 00:15:10.428 "trsvcid": "4420" 00:15:10.428 }, 00:15:10.428 "peer_address": { 00:15:10.428 "trtype": "TCP", 00:15:10.428 "adrfam": "IPv4", 00:15:10.428 "traddr": "10.0.0.1", 00:15:10.428 "trsvcid": "49244" 00:15:10.428 }, 00:15:10.428 "auth": { 00:15:10.428 "state": "completed", 00:15:10.428 "digest": "sha384", 00:15:10.428 "dhgroup": "ffdhe8192" 00:15:10.428 } 00:15:10.428 } 00:15:10.428 ]' 00:15:10.428 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.428 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.428 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.686 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:10.686 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.686 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.686 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.686 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.944 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:15:12.316 19:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.316 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:12.316 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.316 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.316 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.316 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.316 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:12.316 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:12.316 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:12.316 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.316 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:12.316 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:12.316 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:12.316 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.316 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.316 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.316 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.316 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.316 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.316 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.689 00:15:13.689 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.689 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.689 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.689 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.689 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.689 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.689 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.689 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.689 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.689 { 00:15:13.689 "cntlid": 91, 00:15:13.689 "qid": 0, 00:15:13.689 "state": "enabled", 00:15:13.689 "thread": "nvmf_tgt_poll_group_000", 00:15:13.689 "listen_address": { 00:15:13.689 "trtype": "TCP", 00:15:13.689 "adrfam": "IPv4", 00:15:13.689 "traddr": "10.0.0.2", 00:15:13.689 "trsvcid": "4420" 00:15:13.689 }, 00:15:13.689 "peer_address": { 00:15:13.689 "trtype": "TCP", 00:15:13.689 "adrfam": "IPv4", 00:15:13.689 "traddr": "10.0.0.1", 00:15:13.689 "trsvcid": "49258" 00:15:13.689 }, 00:15:13.689 "auth": { 00:15:13.689 "state": "completed", 00:15:13.689 "digest": "sha384", 00:15:13.689 "dhgroup": "ffdhe8192" 00:15:13.689 } 00:15:13.689 } 00:15:13.689 ]' 00:15:13.689 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.690 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.690 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.947 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:13.947 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.947 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.947 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.947 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.205 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.579 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.953 00:15:16.953 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.953 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.953 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.953 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:15:16.953 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.953 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.953 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.953 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.953 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.953 { 00:15:16.953 "cntlid": 93, 00:15:16.953 "qid": 0, 00:15:16.953 "state": "enabled", 00:15:16.953 "thread": "nvmf_tgt_poll_group_000", 00:15:16.953 "listen_address": { 00:15:16.953 "trtype": "TCP", 00:15:16.953 "adrfam": "IPv4", 00:15:16.953 "traddr": "10.0.0.2", 00:15:16.953 "trsvcid": "4420" 00:15:16.953 }, 00:15:16.953 "peer_address": { 00:15:16.953 "trtype": "TCP", 00:15:16.953 "adrfam": "IPv4", 00:15:16.953 "traddr": "10.0.0.1", 00:15:16.953 "trsvcid": "49288" 00:15:16.953 }, 00:15:16.953 "auth": { 00:15:16.953 "state": "completed", 00:15:16.953 "digest": "sha384", 00:15:16.953 "dhgroup": "ffdhe8192" 00:15:16.953 } 00:15:16.953 } 00:15:16.953 ]' 00:15:16.953 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.953 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.953 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.211 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:17.211 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.211 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.211 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.211 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.469 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.843 19:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:18.843 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.217 00:15:20.217 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.217 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.217 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.217 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.217 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.217 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.217 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:15:20.217 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.217 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.217 { 00:15:20.217 "cntlid": 95, 00:15:20.217 "qid": 0, 00:15:20.217 "state": "enabled", 00:15:20.217 "thread": "nvmf_tgt_poll_group_000", 00:15:20.217 "listen_address": { 00:15:20.217 "trtype": "TCP", 00:15:20.217 "adrfam": "IPv4", 00:15:20.217 "traddr": "10.0.0.2", 00:15:20.217 "trsvcid": "4420" 00:15:20.217 }, 00:15:20.217 "peer_address": { 00:15:20.217 "trtype": "TCP", 00:15:20.217 "adrfam": "IPv4", 00:15:20.217 "traddr": "10.0.0.1", 00:15:20.217 "trsvcid": "34044" 00:15:20.217 }, 00:15:20.217 "auth": { 00:15:20.217 "state": "completed", 00:15:20.217 "digest": "sha384", 00:15:20.217 "dhgroup": "ffdhe8192" 00:15:20.217 } 00:15:20.217 } 00:15:20.217 ]' 00:15:20.217 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.217 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.217 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.475 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:20.475 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.475 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.475 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.475 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.733 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:15:22.106 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.106 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:22.106 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.106 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.106 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.106 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:22.106 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.106 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.106 19:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:22.106 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:22.106 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:22.106 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.106 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:22.106 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:22.106 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:22.106 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.106 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.106 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.106 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.106 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.106 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.106 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.671 00:15:22.671 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.671 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.671 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.929 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.929 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.929 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.929 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.929 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.929 19:12:28 
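Each connect_authenticate round then touches both ends: the target is told which key (and, for bidirectional auth, which controller key) the host NQN must present, and a host controller is created naming the same keys. Condensed from the two RPCs above; the key names refer to DH-HMAC-CHAP keys registered earlier in the test, outside this excerpt:

    # Target side: admit the host NQN on cnode0, bound to key0/ckey0.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach a controller that authenticates with the same keys.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0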
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.929 { 00:15:22.929 "cntlid": 97, 00:15:22.929 "qid": 0, 00:15:22.929 "state": "enabled", 00:15:22.929 "thread": "nvmf_tgt_poll_group_000", 00:15:22.929 "listen_address": { 00:15:22.929 "trtype": "TCP", 00:15:22.929 "adrfam": "IPv4", 00:15:22.929 "traddr": "10.0.0.2", 00:15:22.929 "trsvcid": "4420" 00:15:22.929 }, 00:15:22.929 "peer_address": { 00:15:22.929 "trtype": "TCP", 00:15:22.929 "adrfam": "IPv4", 00:15:22.929 "traddr": "10.0.0.1", 00:15:22.929 "trsvcid": "34076" 00:15:22.929 }, 00:15:22.929 "auth": { 00:15:22.929 "state": "completed", 00:15:22.929 "digest": "sha512", 00:15:22.929 "dhgroup": "null" 00:15:22.929 } 00:15:22.929 } 00:15:22.929 ]' 00:15:22.929 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.929 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:22.929 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.929 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:22.929 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.929 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.929 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.929 19:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.187 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:15:24.560 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.560 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:24.560 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.560 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.560 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.560 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.560 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:24.560 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:24.818 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:24.818 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.818 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:24.818 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:24.818 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:24.818 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.818 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.818 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.818 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.818 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.818 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.818 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.076 00:15:25.076 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.076 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.076 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.334 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.334 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.334 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.334 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.334 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.334 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.334 { 00:15:25.334 "cntlid": 99, 00:15:25.334 "qid": 0, 00:15:25.334 "state": "enabled", 00:15:25.334 "thread": "nvmf_tgt_poll_group_000", 00:15:25.334 "listen_address": { 00:15:25.334 "trtype": "TCP", 00:15:25.334 "adrfam": "IPv4", 00:15:25.334 
"traddr": "10.0.0.2", 00:15:25.334 "trsvcid": "4420" 00:15:25.334 }, 00:15:25.334 "peer_address": { 00:15:25.334 "trtype": "TCP", 00:15:25.334 "adrfam": "IPv4", 00:15:25.334 "traddr": "10.0.0.1", 00:15:25.334 "trsvcid": "34110" 00:15:25.334 }, 00:15:25.334 "auth": { 00:15:25.334 "state": "completed", 00:15:25.334 "digest": "sha512", 00:15:25.334 "dhgroup": "null" 00:15:25.334 } 00:15:25.334 } 00:15:25.334 ]' 00:15:25.334 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.334 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:25.335 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.335 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:25.335 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.592 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.592 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.592 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.850 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:15:27.223 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.223 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:27.223 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.223 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.223 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.223 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.223 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:27.223 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:27.223 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:27.223 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.223 19:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:27.223 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:27.223 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:27.223 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.223 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.223 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.223 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.223 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.223 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.223 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.789 00:15:27.789 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:27.789 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:27.789 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.047 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.047 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.047 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.047 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.047 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.047 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.047 { 00:15:28.047 "cntlid": 101, 00:15:28.047 "qid": 0, 00:15:28.047 "state": "enabled", 00:15:28.047 "thread": "nvmf_tgt_poll_group_000", 00:15:28.047 "listen_address": { 00:15:28.047 "trtype": "TCP", 00:15:28.047 "adrfam": "IPv4", 00:15:28.047 "traddr": "10.0.0.2", 00:15:28.047 "trsvcid": "4420" 00:15:28.047 }, 00:15:28.047 "peer_address": { 00:15:28.047 "trtype": "TCP", 00:15:28.047 "adrfam": "IPv4", 00:15:28.047 "traddr": "10.0.0.1", 00:15:28.047 "trsvcid": "34140" 00:15:28.047 }, 00:15:28.047 "auth": { 00:15:28.047 "state": "completed", 00:15:28.047 "digest": "sha512", 00:15:28.047 "dhgroup": "null" 
00:15:28.047 } 00:15:28.047 } 00:15:28.047 ]' 00:15:28.047 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.047 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:28.047 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.047 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:28.047 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.047 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.047 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.047 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.305 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:15:29.678 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.678 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:29.678 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.678 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.678 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.678 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.678 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:29.678 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:29.936 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:29.936 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.936 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:29.936 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:29.936 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:29.936 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.936 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:29.936 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.936 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.936 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.936 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:29.936 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:30.194 00:15:30.194 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.194 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.194 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.453 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.453 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.453 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.453 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.453 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.453 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.453 { 00:15:30.453 "cntlid": 103, 00:15:30.453 "qid": 0, 00:15:30.453 "state": "enabled", 00:15:30.453 "thread": "nvmf_tgt_poll_group_000", 00:15:30.453 "listen_address": { 00:15:30.453 "trtype": "TCP", 00:15:30.453 "adrfam": "IPv4", 00:15:30.453 "traddr": "10.0.0.2", 00:15:30.453 "trsvcid": "4420" 00:15:30.453 }, 00:15:30.453 "peer_address": { 00:15:30.453 "trtype": "TCP", 00:15:30.453 "adrfam": "IPv4", 00:15:30.453 "traddr": "10.0.0.1", 00:15:30.453 "trsvcid": "43954" 00:15:30.453 }, 00:15:30.453 "auth": { 00:15:30.453 "state": "completed", 00:15:30.453 "digest": "sha512", 00:15:30.453 "dhgroup": "null" 00:15:30.453 } 00:15:30.453 } 00:15:30.453 ]' 00:15:30.453 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.453 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:30.711 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.711 19:12:36 
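A successful attach alone is not treated as proof: after each connection the harness queries the target for the queue pair's auth record and asserts all three negotiated fields, which is what the jq probes around this point are doing. The same check written out for the current sha512/null iteration:

    # Ask the target what the admin queue actually negotiated and verify it.
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]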
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:30.711 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.711 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.711 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.711 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.968 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:15:32.348 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.348 19:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.348 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.968 00:15:32.968 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.968 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.968 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.227 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.227 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.227 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.227 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.227 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.227 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.227 { 00:15:33.227 "cntlid": 105, 00:15:33.227 "qid": 0, 00:15:33.227 "state": "enabled", 00:15:33.227 "thread": "nvmf_tgt_poll_group_000", 00:15:33.227 "listen_address": { 00:15:33.227 "trtype": "TCP", 00:15:33.227 "adrfam": "IPv4", 00:15:33.227 "traddr": "10.0.0.2", 00:15:33.227 "trsvcid": "4420" 00:15:33.227 }, 00:15:33.227 "peer_address": { 00:15:33.227 "trtype": "TCP", 00:15:33.227 "adrfam": "IPv4", 00:15:33.227 "traddr": "10.0.0.1", 00:15:33.227 "trsvcid": "43990" 00:15:33.227 }, 00:15:33.227 "auth": { 00:15:33.227 "state": "completed", 00:15:33.227 "digest": "sha512", 00:15:33.227 "dhgroup": "ffdhe2048" 00:15:33.227 } 00:15:33.227 } 00:15:33.227 ]' 00:15:33.227 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.227 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:33.227 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.227 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:33.227 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.227 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.227 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.227 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.485 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:15:34.860 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.860 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:34.860 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.860 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.860 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.860 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.860 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:34.860 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:35.117 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:35.117 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.117 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:35.117 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:35.117 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:35.117 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.117 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.117 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.117 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.117 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:35.117 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.117 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.375 00:15:35.375 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.375 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.375 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.632 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.632 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.632 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.632 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.632 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.632 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.632 { 00:15:35.632 "cntlid": 107, 00:15:35.632 "qid": 0, 00:15:35.632 "state": "enabled", 00:15:35.633 "thread": "nvmf_tgt_poll_group_000", 00:15:35.633 "listen_address": { 00:15:35.633 "trtype": "TCP", 00:15:35.633 "adrfam": "IPv4", 00:15:35.633 "traddr": "10.0.0.2", 00:15:35.633 "trsvcid": "4420" 00:15:35.633 }, 00:15:35.633 "peer_address": { 00:15:35.633 "trtype": "TCP", 00:15:35.633 "adrfam": "IPv4", 00:15:35.633 "traddr": "10.0.0.1", 00:15:35.633 "trsvcid": "44022" 00:15:35.633 }, 00:15:35.633 "auth": { 00:15:35.633 "state": "completed", 00:15:35.633 "digest": "sha512", 00:15:35.633 "dhgroup": "ffdhe2048" 00:15:35.633 } 00:15:35.633 } 00:15:35.633 ]' 00:15:35.633 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.633 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:35.633 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.633 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:35.633 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.890 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.890 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.890 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.148 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:15:37.080 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.338 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:37.338 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.338 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.338 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.338 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.338 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:37.338 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:37.594 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:15:37.594 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.594 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:37.594 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:37.594 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:37.594 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.594 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.594 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.594 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.594 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.594 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
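Between the RPC detach and the next iteration, each cycle also exercises the same key material from the kernel initiator (the nvme connect / nvme disconnect pairs above). Unlike the SPDK host, nvme-cli takes the secrets inline in DHHC-1 wire format: --dhchap-secret for the host key and --dhchap-ctrl-secret for the controller (bidirectional) key. The pattern used throughout this run, with the secrets abbreviated here:

    # Kernel-initiator round trip with the same credentials.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc \
        --hostid a27f578f-8275-e111-bd1d-001e673e77fc \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0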
00:15:37.594 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.851 00:15:37.851 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.851 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.851 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.108 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.108 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.108 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.108 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.108 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.108 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.108 { 00:15:38.108 "cntlid": 109, 00:15:38.108 "qid": 0, 00:15:38.108 "state": "enabled", 00:15:38.108 "thread": "nvmf_tgt_poll_group_000", 00:15:38.108 "listen_address": { 00:15:38.108 "trtype": "TCP", 00:15:38.108 "adrfam": "IPv4", 00:15:38.108 "traddr": "10.0.0.2", 00:15:38.108 "trsvcid": "4420" 00:15:38.108 }, 00:15:38.108 "peer_address": { 00:15:38.108 "trtype": "TCP", 00:15:38.108 "adrfam": "IPv4", 00:15:38.108 "traddr": "10.0.0.1", 00:15:38.108 "trsvcid": "44054" 00:15:38.108 }, 00:15:38.108 "auth": { 00:15:38.108 "state": "completed", 00:15:38.109 "digest": "sha512", 00:15:38.109 "dhgroup": "ffdhe2048" 00:15:38.109 } 00:15:38.109 } 00:15:38.109 ]' 00:15:38.109 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:38.366 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:38.366 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.366 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:38.366 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:38.366 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.366 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.366 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.623 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:15:39.992 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.993 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:39.993 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.993 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.993 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.993 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.993 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:39.993 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:40.249 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:15:40.249 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.249 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:40.249 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:40.249 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:40.249 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.249 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:40.250 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.250 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.250 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.250 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.250 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.507 00:15:40.507 19:12:46 
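On the DHHC-1 strings themselves: reading them against the NVMe in-band authentication spec (the log does not explain them), the two-digit field after the prefix names the key transformation, 00 for an unhashed secret and 01/02/03 for SHA-256/384/512-transformed ones, followed by the base64 of the key material plus a CRC tail. Recent nvme-cli builds can mint such secrets; a sketch, assuming a build that ships the gen-dhchap-key subcommand:

    # Hypothetical example: generate a SHA-384-transformed host key for this
    # host NQN (flags assume a recent nvme-cli with gen-dhchap-key).
    nvme gen-dhchap-key --hmac=2 --key-length=48 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
    # prints something of the form DHHC-1:02:<base64 key + crc>: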
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:40.507 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.507 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.765 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.765 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.765 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.765 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.765 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.765 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.765 { 00:15:40.765 "cntlid": 111, 00:15:40.765 "qid": 0, 00:15:40.765 "state": "enabled", 00:15:40.765 "thread": "nvmf_tgt_poll_group_000", 00:15:40.765 "listen_address": { 00:15:40.765 "trtype": "TCP", 00:15:40.765 "adrfam": "IPv4", 00:15:40.765 "traddr": "10.0.0.2", 00:15:40.765 "trsvcid": "4420" 00:15:40.765 }, 00:15:40.765 "peer_address": { 00:15:40.765 "trtype": "TCP", 00:15:40.765 "adrfam": "IPv4", 00:15:40.765 "traddr": "10.0.0.1", 00:15:40.765 "trsvcid": "56966" 00:15:40.765 }, 00:15:40.765 "auth": { 00:15:40.765 "state": "completed", 00:15:40.765 "digest": "sha512", 00:15:40.765 "dhgroup": "ffdhe2048" 00:15:40.765 } 00:15:40.765 } 00:15:40.765 ]' 00:15:40.765 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.765 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:40.765 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.022 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:41.022 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.022 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.022 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.022 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.279 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.653 19:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.653 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.218 00:15:43.218 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.218 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.218 19:12:49 
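For orientation, the auth.sh@91-96 markers above give away the overall shape of this stretch of the log: every digest is crossed with every DH group (the sweep has just advanced to ffdhe3072) and with every key index, one full provision/attach/verify/teardown cycle per combination. Reconstructed from those trace markers, not copied from target/auth.sh:

    # Shape of the sweep, as implied by the xtrace markers.
    for digest in "${digests[@]}"; do          # e.g. sha384, sha512, ...
        for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe2048, ffdhe3072, ...
            for keyid in "${!keys[@]}"; do     # 0..3
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                    --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done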
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.475 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.475 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.475 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.475 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.475 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.475 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.475 { 00:15:43.475 "cntlid": 113, 00:15:43.475 "qid": 0, 00:15:43.475 "state": "enabled", 00:15:43.475 "thread": "nvmf_tgt_poll_group_000", 00:15:43.475 "listen_address": { 00:15:43.475 "trtype": "TCP", 00:15:43.475 "adrfam": "IPv4", 00:15:43.475 "traddr": "10.0.0.2", 00:15:43.475 "trsvcid": "4420" 00:15:43.475 }, 00:15:43.475 "peer_address": { 00:15:43.475 "trtype": "TCP", 00:15:43.475 "adrfam": "IPv4", 00:15:43.475 "traddr": "10.0.0.1", 00:15:43.475 "trsvcid": "57002" 00:15:43.475 }, 00:15:43.475 "auth": { 00:15:43.475 "state": "completed", 00:15:43.475 "digest": "sha512", 00:15:43.475 "dhgroup": "ffdhe3072" 00:15:43.475 } 00:15:43.475 } 00:15:43.475 ]' 00:15:43.475 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.475 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.475 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.475 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:43.475 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.475 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.475 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.475 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.040 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:15:44.973 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.973 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:44.973 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.973 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.973 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.973 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.973 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.973 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:45.539 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:15:45.539 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.539 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:45.539 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:45.539 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:45.539 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.539 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.539 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.539 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.539 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.539 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.539 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.797 00:15:45.797 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.797 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.797 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.055 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:15:46.055 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.055 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.055 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.055 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.055 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.055 { 00:15:46.055 "cntlid": 115, 00:15:46.055 "qid": 0, 00:15:46.055 "state": "enabled", 00:15:46.055 "thread": "nvmf_tgt_poll_group_000", 00:15:46.055 "listen_address": { 00:15:46.055 "trtype": "TCP", 00:15:46.055 "adrfam": "IPv4", 00:15:46.055 "traddr": "10.0.0.2", 00:15:46.055 "trsvcid": "4420" 00:15:46.055 }, 00:15:46.055 "peer_address": { 00:15:46.055 "trtype": "TCP", 00:15:46.055 "adrfam": "IPv4", 00:15:46.055 "traddr": "10.0.0.1", 00:15:46.055 "trsvcid": "57014" 00:15:46.055 }, 00:15:46.055 "auth": { 00:15:46.055 "state": "completed", 00:15:46.055 "digest": "sha512", 00:15:46.055 "dhgroup": "ffdhe3072" 00:15:46.055 } 00:15:46.055 } 00:15:46.055 ]' 00:15:46.055 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.055 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:46.055 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.055 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.055 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.313 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.313 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.313 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.571 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.944 19:12:53 
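
Two RPC sockets are in play here and are easy to conflate: rpc_cmd drives the target over its default socket, while hostrpc is the same rpc.py pointed at /var/tmp/host.sock (its expansion is shown on every @31 line). One key1-style pairing from above, condensed, with the NQNs pulled into variables:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
  # Target side: authorize the host NQN with a DH-HMAC-CHAP key pair.
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Host side (/var/tmp/host.sock): attach, which forces authentication.
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
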
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.509 00:15:48.510 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.510 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.510 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.767 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.767 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.767 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.767 19:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.767 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.768 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.768 { 00:15:48.768 "cntlid": 117, 00:15:48.768 "qid": 0, 00:15:48.768 "state": "enabled", 00:15:48.768 "thread": "nvmf_tgt_poll_group_000", 00:15:48.768 "listen_address": { 00:15:48.768 "trtype": "TCP", 00:15:48.768 "adrfam": "IPv4", 00:15:48.768 "traddr": "10.0.0.2", 00:15:48.768 "trsvcid": "4420" 00:15:48.768 }, 00:15:48.768 "peer_address": { 00:15:48.768 "trtype": "TCP", 00:15:48.768 "adrfam": "IPv4", 00:15:48.768 "traddr": "10.0.0.1", 00:15:48.768 "trsvcid": "57048" 00:15:48.768 }, 00:15:48.768 "auth": { 00:15:48.768 "state": "completed", 00:15:48.768 "digest": "sha512", 00:15:48.768 "dhgroup": "ffdhe3072" 00:15:48.768 } 00:15:48.768 } 00:15:48.768 ]' 00:15:48.768 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.768 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:48.768 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.768 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:48.768 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.768 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.768 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.768 19:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.333 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:15:50.266 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.266 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:50.266 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.266 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.266 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.266 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.266 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:15:50.266 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:50.524 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:15:50.524 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.524 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:50.524 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:50.524 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:50.524 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.524 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:50.524 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.524 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.524 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.524 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.524 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:51.089 00:15:51.089 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.089 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.089 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.347 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.347 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.347 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.347 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.347 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.347 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.347 { 00:15:51.347 "cntlid": 119, 00:15:51.347 "qid": 0, 00:15:51.347 "state": "enabled", 00:15:51.347 "thread": 
"nvmf_tgt_poll_group_000", 00:15:51.347 "listen_address": { 00:15:51.347 "trtype": "TCP", 00:15:51.347 "adrfam": "IPv4", 00:15:51.347 "traddr": "10.0.0.2", 00:15:51.347 "trsvcid": "4420" 00:15:51.347 }, 00:15:51.347 "peer_address": { 00:15:51.347 "trtype": "TCP", 00:15:51.347 "adrfam": "IPv4", 00:15:51.347 "traddr": "10.0.0.1", 00:15:51.347 "trsvcid": "60980" 00:15:51.347 }, 00:15:51.347 "auth": { 00:15:51.347 "state": "completed", 00:15:51.347 "digest": "sha512", 00:15:51.347 "dhgroup": "ffdhe3072" 00:15:51.347 } 00:15:51.347 } 00:15:51.347 ]' 00:15:51.347 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.347 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.347 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.347 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:51.347 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.347 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.347 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.347 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.605 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:15:52.978 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.978 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:52.978 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.978 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.978 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.978 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.978 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.978 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:52.979 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:52.979 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:15:52.979 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.979 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:52.979 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:52.979 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:52.979 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.979 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.979 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.979 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.979 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.979 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.979 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.545 00:15:53.545 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.545 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.545 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.803 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.803 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.803 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.803 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.803 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.803 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.803 { 00:15:53.803 "cntlid": 121, 00:15:53.803 "qid": 0, 00:15:53.803 "state": "enabled", 00:15:53.803 "thread": "nvmf_tgt_poll_group_000", 00:15:53.803 "listen_address": { 00:15:53.803 "trtype": "TCP", 00:15:53.803 "adrfam": "IPv4", 00:15:53.803 "traddr": "10.0.0.2", 00:15:53.803 "trsvcid": "4420" 00:15:53.803 }, 00:15:53.803 "peer_address": { 00:15:53.803 "trtype": "TCP", 00:15:53.803 "adrfam": 
"IPv4", 00:15:53.803 "traddr": "10.0.0.1", 00:15:53.803 "trsvcid": "32778" 00:15:53.803 }, 00:15:53.803 "auth": { 00:15:53.803 "state": "completed", 00:15:53.803 "digest": "sha512", 00:15:53.803 "dhgroup": "ffdhe4096" 00:15:53.803 } 00:15:53.803 } 00:15:53.803 ]' 00:15:53.803 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.803 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.803 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.803 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:53.803 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.061 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.061 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.061 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.319 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:55.691 
19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.691 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.279 00:15:56.279 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.279 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.279 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.537 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.537 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.537 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.537 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.537 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.537 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.537 { 00:15:56.537 "cntlid": 123, 00:15:56.537 "qid": 0, 00:15:56.537 "state": "enabled", 00:15:56.537 "thread": "nvmf_tgt_poll_group_000", 00:15:56.537 "listen_address": { 00:15:56.537 "trtype": "TCP", 00:15:56.537 "adrfam": "IPv4", 00:15:56.537 "traddr": "10.0.0.2", 00:15:56.537 "trsvcid": "4420" 00:15:56.537 }, 00:15:56.537 "peer_address": { 00:15:56.537 "trtype": "TCP", 00:15:56.537 "adrfam": "IPv4", 00:15:56.537 "traddr": "10.0.0.1", 00:15:56.537 "trsvcid": "32806" 00:15:56.537 }, 00:15:56.537 "auth": { 00:15:56.537 "state": "completed", 00:15:56.537 "digest": "sha512", 00:15:56.537 "dhgroup": "ffdhe4096" 00:15:56.537 } 00:15:56.537 } 00:15:56.537 ]' 00:15:56.537 19:13:02 
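
Each attach is then verified from the target's side, as the jq assertions that follow do: nvmf_subsystem_get_qpairs must report a qpair whose auth block matches the pinned parameters. Condensed into one check, with the values from this ffdhe4096 pass:

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # The qpair must have completed DH-HMAC-CHAP with the expected parameters.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
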
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.537 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.537 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.537 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:56.537 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.537 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.538 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.538 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.103 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:15:58.035 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.293 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:58.293 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.293 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.293 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.293 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.293 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:58.293 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:58.551 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:15:58.551 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.551 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:58.551 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:58.551 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:58.551 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:15:58.551 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.551 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.551 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.551 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.551 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.551 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.809 00:15:58.809 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.809 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.809 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.375 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.375 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.375 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.375 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.375 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.375 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.375 { 00:15:59.375 "cntlid": 125, 00:15:59.375 "qid": 0, 00:15:59.375 "state": "enabled", 00:15:59.375 "thread": "nvmf_tgt_poll_group_000", 00:15:59.375 "listen_address": { 00:15:59.375 "trtype": "TCP", 00:15:59.375 "adrfam": "IPv4", 00:15:59.375 "traddr": "10.0.0.2", 00:15:59.375 "trsvcid": "4420" 00:15:59.375 }, 00:15:59.375 "peer_address": { 00:15:59.375 "trtype": "TCP", 00:15:59.375 "adrfam": "IPv4", 00:15:59.375 "traddr": "10.0.0.1", 00:15:59.375 "trsvcid": "34114" 00:15:59.375 }, 00:15:59.375 "auth": { 00:15:59.375 "state": "completed", 00:15:59.375 "digest": "sha512", 00:15:59.375 "dhgroup": "ffdhe4096" 00:15:59.375 } 00:15:59.375 } 00:15:59.375 ]' 00:15:59.375 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.375 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.375 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.375 
19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.375 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.375 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.375 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.375 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.633 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:16:01.006 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.006 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:01.006 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.006 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.006 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.006 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.006 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:01.006 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:01.264 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:01.264 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.264 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:01.264 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:01.264 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:01.264 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.264 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:01.264 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:01.264 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.264 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.264 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.264 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.522 00:16:01.522 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.522 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.522 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.087 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.087 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.087 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.087 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.087 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.087 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.087 { 00:16:02.087 "cntlid": 127, 00:16:02.087 "qid": 0, 00:16:02.087 "state": "enabled", 00:16:02.087 "thread": "nvmf_tgt_poll_group_000", 00:16:02.087 "listen_address": { 00:16:02.087 "trtype": "TCP", 00:16:02.087 "adrfam": "IPv4", 00:16:02.087 "traddr": "10.0.0.2", 00:16:02.087 "trsvcid": "4420" 00:16:02.087 }, 00:16:02.087 "peer_address": { 00:16:02.087 "trtype": "TCP", 00:16:02.087 "adrfam": "IPv4", 00:16:02.087 "traddr": "10.0.0.1", 00:16:02.087 "trsvcid": "34144" 00:16:02.087 }, 00:16:02.087 "auth": { 00:16:02.087 "state": "completed", 00:16:02.087 "digest": "sha512", 00:16:02.087 "dhgroup": "ffdhe4096" 00:16:02.087 } 00:16:02.087 } 00:16:02.087 ]' 00:16:02.087 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.087 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.087 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.087 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:02.087 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.087 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.087 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.087 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.345 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.719 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.653 00:16:04.653 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.653 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.653 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.911 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.911 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.911 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.911 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.911 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.911 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.911 { 00:16:04.911 "cntlid": 129, 00:16:04.911 "qid": 0, 00:16:04.911 "state": "enabled", 00:16:04.911 "thread": "nvmf_tgt_poll_group_000", 00:16:04.911 "listen_address": { 00:16:04.911 "trtype": "TCP", 00:16:04.911 "adrfam": "IPv4", 00:16:04.911 "traddr": "10.0.0.2", 00:16:04.911 "trsvcid": "4420" 00:16:04.911 }, 00:16:04.911 "peer_address": { 00:16:04.911 "trtype": "TCP", 00:16:04.911 "adrfam": "IPv4", 00:16:04.911 "traddr": "10.0.0.1", 00:16:04.911 "trsvcid": "34166" 00:16:04.911 }, 00:16:04.911 "auth": { 00:16:04.911 "state": "completed", 00:16:04.911 "digest": "sha512", 00:16:04.911 "dhgroup": "ffdhe6144" 00:16:04.911 } 00:16:04.911 } 00:16:04.911 ]' 00:16:04.911 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.911 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.911 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.911 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.911 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.911 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.911 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.911 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.169 
19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:16:06.543 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.543 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:06.543 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.543 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.543 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.543 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.543 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:06.543 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:06.801 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:06.801 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.801 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.801 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:06.801 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:06.801 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.801 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.801 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.801 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.801 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.801 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.801 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.367 00:16:07.367 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.367 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.367 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.624 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.624 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.624 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.624 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.624 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.624 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.624 { 00:16:07.624 "cntlid": 131, 00:16:07.624 "qid": 0, 00:16:07.624 "state": "enabled", 00:16:07.624 "thread": "nvmf_tgt_poll_group_000", 00:16:07.624 "listen_address": { 00:16:07.624 "trtype": "TCP", 00:16:07.624 "adrfam": "IPv4", 00:16:07.624 "traddr": "10.0.0.2", 00:16:07.624 "trsvcid": "4420" 00:16:07.624 }, 00:16:07.624 "peer_address": { 00:16:07.624 "trtype": "TCP", 00:16:07.624 "adrfam": "IPv4", 00:16:07.624 "traddr": "10.0.0.1", 00:16:07.624 "trsvcid": "34188" 00:16:07.624 }, 00:16:07.624 "auth": { 00:16:07.624 "state": "completed", 00:16:07.624 "digest": "sha512", 00:16:07.624 "dhgroup": "ffdhe6144" 00:16:07.624 } 00:16:07.624 } 00:16:07.624 ]' 00:16:07.624 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.624 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.624 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.882 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:07.882 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.882 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.882 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.882 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.140 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret 
DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.512 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.078 
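The sha512/ffdhe6144 pass above repeats one round-trip per key index. A minimal sketch of that loop, assuming only what the traces themselves show (hostrpc forwards to scripts/rpc.py -s /var/tmp/host.sock, rpc_cmd addresses the target application, and $hostnqn, $hostid, $id, $key, $ckey are illustrative stand-ins for the concrete values seen in this log):
# one (digest, dhgroup, key) round: configure the host side, register the host NQN with the
# key pair on the target, authenticate via the SPDK initiator, then via the kernel initiator
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$id" --dhchap-ctrlr-key "ckey$id"   # ctrlr key omitted when ckeys[$id] is empty (key3 rounds)
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "key$id" --dhchap-ctrlr-key "ckey$id"
hostrpc bdev_nvme_detach_controller nvme0                    # after the qpair auth checks pass
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
        --hostid "$hostid" --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"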
00:16:10.336 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.336 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.336 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.593 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.593 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.593 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.593 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.593 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.593 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.593 { 00:16:10.593 "cntlid": 133, 00:16:10.593 "qid": 0, 00:16:10.593 "state": "enabled", 00:16:10.593 "thread": "nvmf_tgt_poll_group_000", 00:16:10.593 "listen_address": { 00:16:10.593 "trtype": "TCP", 00:16:10.593 "adrfam": "IPv4", 00:16:10.593 "traddr": "10.0.0.2", 00:16:10.593 "trsvcid": "4420" 00:16:10.593 }, 00:16:10.593 "peer_address": { 00:16:10.593 "trtype": "TCP", 00:16:10.593 "adrfam": "IPv4", 00:16:10.593 "traddr": "10.0.0.1", 00:16:10.593 "trsvcid": "44120" 00:16:10.593 }, 00:16:10.593 "auth": { 00:16:10.593 "state": "completed", 00:16:10.593 "digest": "sha512", 00:16:10.593 "dhgroup": "ffdhe6144" 00:16:10.593 } 00:16:10.593 } 00:16:10.593 ]' 00:16:10.593 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.593 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.593 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.593 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.593 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.594 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.594 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.594 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.852 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:16:12.224 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.224 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:16:12.224 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:12.224 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.224 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.224 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.224 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.224 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:12.224 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:12.481 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:12.481 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.481 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:12.481 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:12.481 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:12.481 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.481 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:12.481 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.481 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.481 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.481 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:12.481 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:13.046 00:16:13.046 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.046 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.046 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:13.304 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.304 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.304 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.304 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.304 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.304 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.304 { 00:16:13.304 "cntlid": 135, 00:16:13.304 "qid": 0, 00:16:13.304 "state": "enabled", 00:16:13.304 "thread": "nvmf_tgt_poll_group_000", 00:16:13.304 "listen_address": { 00:16:13.304 "trtype": "TCP", 00:16:13.304 "adrfam": "IPv4", 00:16:13.304 "traddr": "10.0.0.2", 00:16:13.304 "trsvcid": "4420" 00:16:13.304 }, 00:16:13.304 "peer_address": { 00:16:13.304 "trtype": "TCP", 00:16:13.304 "adrfam": "IPv4", 00:16:13.304 "traddr": "10.0.0.1", 00:16:13.304 "trsvcid": "44156" 00:16:13.304 }, 00:16:13.304 "auth": { 00:16:13.304 "state": "completed", 00:16:13.304 "digest": "sha512", 00:16:13.304 "dhgroup": "ffdhe6144" 00:16:13.304 } 00:16:13.304 } 00:16:13.304 ]' 00:16:13.304 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.561 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.561 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.561 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:13.561 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.561 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.561 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.561 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.818 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:16:15.189 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.189 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:15.189 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.189 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:15.189 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.189 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.190 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.190 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:15.190 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:15.190 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:15.190 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.190 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.190 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:15.190 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:15.190 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.190 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.190 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.190 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.190 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.190 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.190 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.561 00:16:16.561 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.561 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.561 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.561 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.561 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
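Between attach and detach, every round reads back the controller name and the target-side qpair, then asserts the negotiated auth parameters. A compact sketch of exactly the checks visible in the traces (the jq filters are the ones at auth.sh@44, @46, @47 and @48; $digest and $dhgroup hold the current round's values):
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]   # e.g. sha512
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]   # e.g. ffdhe8192
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]   # DH-HMAC-CHAP handshake finished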
00:16:16.561 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.561 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.561 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.561 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.561 { 00:16:16.561 "cntlid": 137, 00:16:16.561 "qid": 0, 00:16:16.561 "state": "enabled", 00:16:16.561 "thread": "nvmf_tgt_poll_group_000", 00:16:16.561 "listen_address": { 00:16:16.561 "trtype": "TCP", 00:16:16.561 "adrfam": "IPv4", 00:16:16.561 "traddr": "10.0.0.2", 00:16:16.561 "trsvcid": "4420" 00:16:16.561 }, 00:16:16.561 "peer_address": { 00:16:16.561 "trtype": "TCP", 00:16:16.561 "adrfam": "IPv4", 00:16:16.561 "traddr": "10.0.0.1", 00:16:16.561 "trsvcid": "44180" 00:16:16.561 }, 00:16:16.561 "auth": { 00:16:16.561 "state": "completed", 00:16:16.561 "digest": "sha512", 00:16:16.561 "dhgroup": "ffdhe8192" 00:16:16.561 } 00:16:16.561 } 00:16:16.561 ]' 00:16:16.561 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.819 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.819 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.819 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:16.819 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.819 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.819 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.819 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.076 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.450 19:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.823 00:16:19.823 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.823 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.823 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.823 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.823 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.823 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.823 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.823 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.823 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.823 { 00:16:19.823 "cntlid": 139, 00:16:19.823 "qid": 0, 00:16:19.823 "state": "enabled", 00:16:19.823 "thread": "nvmf_tgt_poll_group_000", 00:16:19.823 "listen_address": { 00:16:19.823 "trtype": "TCP", 00:16:19.823 "adrfam": "IPv4", 00:16:19.823 "traddr": "10.0.0.2", 00:16:19.823 "trsvcid": "4420" 00:16:19.823 }, 00:16:19.823 "peer_address": { 00:16:19.823 "trtype": "TCP", 00:16:19.823 "adrfam": "IPv4", 00:16:19.823 "traddr": "10.0.0.1", 00:16:19.823 "trsvcid": "40236" 00:16:19.823 }, 00:16:19.823 "auth": { 00:16:19.823 "state": "completed", 00:16:19.823 "digest": "sha512", 00:16:19.823 "dhgroup": "ffdhe8192" 00:16:19.823 } 00:16:19.824 } 00:16:19.824 ]' 00:16:19.824 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.824 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.081 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.081 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.081 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.081 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.081 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.081 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.339 19:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjZmZWE0M2UzNWE2NmE3MDE4NzE5YWU2MjhmY2Y1OWQUgtom: --dhchap-ctrl-secret DHHC-1:02:OTEzOTdmZWEyYmExMGQ4NWIxNDdmZmU3N2I5ZWZhYTVmN2U0YWNiM2JmNGQ2NjhiOWQYvA==: 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.775 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.149 00:16:23.149 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.149 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.149 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.149 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.149 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.149 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.149 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.149 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.149 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.149 { 00:16:23.149 "cntlid": 141, 00:16:23.149 "qid": 0, 00:16:23.149 "state": "enabled", 00:16:23.149 "thread": "nvmf_tgt_poll_group_000", 00:16:23.149 "listen_address": 
{ 00:16:23.149 "trtype": "TCP", 00:16:23.149 "adrfam": "IPv4", 00:16:23.149 "traddr": "10.0.0.2", 00:16:23.149 "trsvcid": "4420" 00:16:23.149 }, 00:16:23.149 "peer_address": { 00:16:23.149 "trtype": "TCP", 00:16:23.149 "adrfam": "IPv4", 00:16:23.149 "traddr": "10.0.0.1", 00:16:23.149 "trsvcid": "40264" 00:16:23.149 }, 00:16:23.149 "auth": { 00:16:23.149 "state": "completed", 00:16:23.149 "digest": "sha512", 00:16:23.149 "dhgroup": "ffdhe8192" 00:16:23.149 } 00:16:23.149 } 00:16:23.149 ]' 00:16:23.149 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.149 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.149 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.149 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.149 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.407 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.407 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.407 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.665 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZTAxMjJjOTYyODRkNTgwMjUwM2VmYmZiMzM1NmJiNWMyY2I3YTlkNjU4ZWNiMDgxls2F4g==: --dhchap-ctrl-secret DHHC-1:01:OWJlZjA5MzlmOWRiNjA0ODZkOGY0NDQxOWU3ZTczZjMmn7gr: 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.040 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.972 00:16:26.229 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.229 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.229 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.487 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.487 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.487 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.487 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.487 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.487 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.487 { 00:16:26.487 "cntlid": 143, 00:16:26.487 "qid": 0, 00:16:26.487 "state": "enabled", 00:16:26.487 "thread": "nvmf_tgt_poll_group_000", 00:16:26.487 "listen_address": { 00:16:26.487 "trtype": "TCP", 00:16:26.487 "adrfam": "IPv4", 00:16:26.487 "traddr": "10.0.0.2", 00:16:26.487 "trsvcid": "4420" 00:16:26.487 }, 00:16:26.487 "peer_address": { 00:16:26.487 "trtype": "TCP", 00:16:26.487 "adrfam": "IPv4", 00:16:26.487 "traddr": "10.0.0.1", 00:16:26.487 "trsvcid": "40286" 00:16:26.487 }, 00:16:26.487 "auth": { 00:16:26.487 "state": "completed", 00:16:26.487 "digest": "sha512", 00:16:26.487 "dhgroup": 
"ffdhe8192" 00:16:26.487 } 00:16:26.487 } 00:16:26.487 ]' 00:16:26.487 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.487 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.487 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.487 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:26.487 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.487 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.487 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.487 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.744 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:16:28.114 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.115 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:28.115 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.115 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.115 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.115 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:28.115 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:28.115 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:28.115 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:28.115 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:28.115 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:28.372 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:28.372 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.372 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:28.372 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:28.372 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:28.372 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.372 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.372 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.372 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.372 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.372 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.372 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.304 00:16:29.304 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.304 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.305 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.563 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.563 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.563 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.563 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.563 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.563 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.563 { 00:16:29.563 "cntlid": 145, 00:16:29.563 "qid": 0, 00:16:29.563 "state": "enabled", 00:16:29.563 "thread": "nvmf_tgt_poll_group_000", 00:16:29.563 "listen_address": { 00:16:29.563 "trtype": "TCP", 00:16:29.563 "adrfam": "IPv4", 00:16:29.563 "traddr": "10.0.0.2", 00:16:29.563 "trsvcid": "4420" 00:16:29.563 }, 00:16:29.563 "peer_address": { 00:16:29.563 "trtype": "TCP", 00:16:29.563 "adrfam": "IPv4", 00:16:29.563 "traddr": "10.0.0.1", 00:16:29.563 "trsvcid": "40308" 00:16:29.563 }, 00:16:29.563 "auth": { 00:16:29.563 
"state": "completed", 00:16:29.563 "digest": "sha512", 00:16:29.563 "dhgroup": "ffdhe8192" 00:16:29.563 } 00:16:29.563 } 00:16:29.563 ]' 00:16:29.563 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.820 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.820 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.820 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:29.820 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.820 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.820 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.820 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.078 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:M2U2MmU1MTY0ZTBhYTIwZTg4NTVjMWJiODRlOTMxZDM4N2ZkOTUxYjMwMGYyMmMw34xEuQ==: --dhchap-ctrl-secret DHHC-1:03:N2EzYTQwMDYyMWUzOWI3M2U2MDZiOTZkZmQ5YjhmZDgxZjI0ZWZjNzRlNjUyYWEyOWNjYmExZTIwZWFjODhkMtVSbO4=: 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:31.450 19:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:31.450 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:32.383 request: 00:16:32.383 { 00:16:32.383 "name": "nvme0", 00:16:32.383 "trtype": "tcp", 00:16:32.383 "traddr": "10.0.0.2", 00:16:32.383 "adrfam": "ipv4", 00:16:32.383 "trsvcid": "4420", 00:16:32.383 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:32.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:16:32.383 "prchk_reftag": false, 00:16:32.383 "prchk_guard": false, 00:16:32.383 "hdgst": false, 00:16:32.383 "ddgst": false, 00:16:32.383 "dhchap_key": "key2", 00:16:32.383 "method": "bdev_nvme_attach_controller", 00:16:32.383 "req_id": 1 00:16:32.383 } 00:16:32.383 Got JSON-RPC error response 00:16:32.383 response: 00:16:32.383 { 00:16:32.383 "code": -5, 00:16:32.383 "message": "Input/output error" 00:16:32.383 } 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.383 
19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:32.383 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:32.384 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:32.384 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:32.384 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:33.317 request: 00:16:33.317 { 00:16:33.317 "name": "nvme0", 00:16:33.317 "trtype": "tcp", 00:16:33.317 "traddr": "10.0.0.2", 00:16:33.317 "adrfam": "ipv4", 00:16:33.317 "trsvcid": "4420", 00:16:33.317 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:33.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:16:33.317 "prchk_reftag": false, 00:16:33.317 "prchk_guard": false, 00:16:33.317 "hdgst": false, 00:16:33.317 "ddgst": false, 00:16:33.317 "dhchap_key": "key1", 00:16:33.317 "dhchap_ctrlr_key": "ckey2", 00:16:33.317 "method": "bdev_nvme_attach_controller", 00:16:33.317 "req_id": 1 00:16:33.317 } 00:16:33.317 Got JSON-RPC error response 00:16:33.317 response: 00:16:33.317 { 00:16:33.317 "code": -5, 00:16:33.317 "message": "Input/output error" 00:16:33.317 } 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:33.317 19:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.317 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.251 request: 00:16:34.251 { 00:16:34.251 "name": "nvme0", 00:16:34.251 "trtype": "tcp", 00:16:34.251 "traddr": "10.0.0.2", 00:16:34.251 "adrfam": "ipv4", 00:16:34.251 "trsvcid": "4420", 00:16:34.251 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:34.251 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:16:34.251 "prchk_reftag": false, 00:16:34.251 "prchk_guard": false, 00:16:34.251 "hdgst": false, 00:16:34.251 "ddgst": false, 00:16:34.251 "dhchap_key": "key1", 00:16:34.251 "dhchap_ctrlr_key": "ckey1", 00:16:34.251 "method": "bdev_nvme_attach_controller", 00:16:34.251 "req_id": 1 00:16:34.251 } 00:16:34.251 Got JSON-RPC error response 00:16:34.251 response: 00:16:34.251 { 00:16:34.251 "code": -5, 00:16:34.251 "message": "Input/output error" 00:16:34.251 } 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2551604 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2551604 ']' 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2551604 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2551604 00:16:34.251 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:34.252 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:34.252 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2551604' 00:16:34.252 killing process with pid 2551604 00:16:34.252 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2551604 00:16:34.252 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2551604 00:16:34.510 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:34.510 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:34.510 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:34.510 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.510 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=2572268 00:16:34.510 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:34.510 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2572268 00:16:34.510 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2572268 ']' 00:16:34.510 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.510 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.510 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.510 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.510 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.768 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.768 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:34.768 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:34.768 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:34.768 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.768 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.768 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:34.768 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2572268 00:16:34.768 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2572268 ']' 00:16:34.768 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.768 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.768 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:34.768 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.768 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.026 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.026 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:35.026 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:35.026 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.026 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.285 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.285 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:35.285 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.285 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:35.285 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:35.285 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:35.285 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.285 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:35.285 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.285 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.285 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.285 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.285 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:36.218 00:16:36.218 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.218 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.218 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.476 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.476 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.476 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.476 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.476 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.476 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.476 { 00:16:36.476 "cntlid": 1, 00:16:36.476 "qid": 0, 00:16:36.476 "state": "enabled", 00:16:36.476 "thread": "nvmf_tgt_poll_group_000", 00:16:36.476 "listen_address": { 00:16:36.476 "trtype": "TCP", 00:16:36.476 "adrfam": "IPv4", 00:16:36.476 "traddr": "10.0.0.2", 00:16:36.476 "trsvcid": "4420" 00:16:36.476 }, 00:16:36.476 "peer_address": { 00:16:36.476 "trtype": "TCP", 00:16:36.476 "adrfam": "IPv4", 00:16:36.476 "traddr": "10.0.0.1", 00:16:36.476 "trsvcid": "51672" 00:16:36.476 }, 00:16:36.476 "auth": { 00:16:36.476 "state": "completed", 00:16:36.476 "digest": "sha512", 00:16:36.476 "dhgroup": "ffdhe8192" 00:16:36.476 } 00:16:36.476 } 00:16:36.476 ]' 00:16:36.476 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.734 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.734 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.734 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:36.734 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.734 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.734 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.734 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.992 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YzhjOTRjNWYyYTc1MzkyZDllYjBkZjJjZGEwYzU5MmU2YTdiOWU2MGU0ZTYzYWU2MmFkZGRjY2M5MDhmZTFlNLl9NnQ=: 00:16:38.364 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.364 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:38.364 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.364 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.364 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.364 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:38.364 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.364 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.364 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.364 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:38.364 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:38.622 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.622 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:38.622 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.622 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:38.622 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.622 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:38.622 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.622 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.622 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.880 request: 00:16:38.880 { 00:16:38.880 "name": "nvme0", 00:16:38.880 "trtype": "tcp", 00:16:38.880 "traddr": "10.0.0.2", 00:16:38.880 "adrfam": "ipv4", 00:16:38.880 "trsvcid": "4420", 00:16:38.880 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:38.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:16:38.880 "prchk_reftag": false, 00:16:38.880 "prchk_guard": false, 00:16:38.880 "hdgst": false, 00:16:38.880 "ddgst": false, 00:16:38.880 "dhchap_key": "key3", 00:16:38.880 "method": "bdev_nvme_attach_controller", 00:16:38.880 "req_id": 1 00:16:38.880 } 00:16:38.880 Got JSON-RPC error response 00:16:38.880 response: 00:16:38.880 { 00:16:38.880 "code": -5, 00:16:38.880 "message": "Input/output error" 00:16:38.880 } 00:16:38.880 19:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:38.880 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:38.880 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:38.880 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:38.880 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:16:38.880 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:38.880 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:38.881 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:39.145 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.145 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:39.145 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.145 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:39.145 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:39.145 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:39.145 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:39.145 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.145 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.403 request: 00:16:39.403 { 00:16:39.403 "name": "nvme0", 00:16:39.403 "trtype": "tcp", 00:16:39.403 "traddr": "10.0.0.2", 00:16:39.403 "adrfam": "ipv4", 00:16:39.403 "trsvcid": "4420", 00:16:39.403 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:39.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:16:39.403 "prchk_reftag": false, 00:16:39.403 "prchk_guard": false, 00:16:39.403 "hdgst": false, 00:16:39.403 "ddgst": false, 00:16:39.403 "dhchap_key": "key3", 00:16:39.403 
"method": "bdev_nvme_attach_controller", 00:16:39.403 "req_id": 1 00:16:39.403 } 00:16:39.403 Got JSON-RPC error response 00:16:39.403 response: 00:16:39.403 { 00:16:39.403 "code": -5, 00:16:39.403 "message": "Input/output error" 00:16:39.403 } 00:16:39.403 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:39.403 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:39.403 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:39.403 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:39.403 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:39.403 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:16:39.403 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:39.403 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:39.403 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:39.403 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:39.660 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:39.660 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.660 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.660 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.660 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:39.660 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.660 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.660 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.661 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:39.661 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:39.661 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:39.661 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:39.661 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:39.661 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:39.661 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:39.661 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:39.661 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:40.227 request: 00:16:40.227 { 00:16:40.227 "name": "nvme0", 00:16:40.227 "trtype": "tcp", 00:16:40.227 "traddr": "10.0.0.2", 00:16:40.227 "adrfam": "ipv4", 00:16:40.227 "trsvcid": "4420", 00:16:40.227 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:40.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:16:40.227 "prchk_reftag": false, 00:16:40.227 "prchk_guard": false, 00:16:40.227 "hdgst": false, 00:16:40.227 "ddgst": false, 00:16:40.227 "dhchap_key": "key0", 00:16:40.227 "dhchap_ctrlr_key": "key1", 00:16:40.227 "method": "bdev_nvme_attach_controller", 00:16:40.227 "req_id": 1 00:16:40.227 } 00:16:40.227 Got JSON-RPC error response 00:16:40.227 response: 00:16:40.227 { 00:16:40.227 "code": -5, 00:16:40.227 "message": "Input/output error" 00:16:40.227 } 00:16:40.227 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:40.227 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:40.227 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:40.227 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:40.227 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:40.227 19:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:40.485 00:16:40.485 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:40.485 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
00:16:40.485 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.743 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.743 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.743 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.002 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:16:41.002 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:41.002 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2551653 00:16:41.002 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2551653 ']' 00:16:41.002 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2551653 00:16:41.002 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:41.002 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:41.002 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2551653 00:16:41.002 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:41.002 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:41.002 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2551653' 00:16:41.002 killing process with pid 2551653 00:16:41.002 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2551653 00:16:41.002 19:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2551653 00:16:41.568 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:41.568 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:41.568 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:16:41.568 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:41.568 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:41.568 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:41.569 rmmod nvme_tcp 00:16:41.569 rmmod nvme_fabrics 00:16:41.569 rmmod nvme_keyring 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 2572268 ']' 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2572268 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2572268 ']' 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2572268 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2572268 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2572268' 00:16:41.569 killing process with pid 2572268 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2572268 00:16:41.569 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2572268 00:16:41.829 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:41.829 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:41.829 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:41.829 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.829 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:41.829 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.829 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.829 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.0fk /tmp/spdk.key-sha256.CSY /tmp/spdk.key-sha384.JaM /tmp/spdk.key-sha512.VHG /tmp/spdk.key-sha512.IfS /tmp/spdk.key-sha384.Fib /tmp/spdk.key-sha256.Dtt '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:16:43.741 00:16:43.741 real 3m39.636s 00:16:43.741 user 8m32.340s 00:16:43.741 sys 0m25.973s 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.741 ************************************ 00:16:43.741 END TEST nvmf_auth_target 00:16:43.741 ************************************ 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:43.741 19:13:49 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:43.741 ************************************ 00:16:43.741 START TEST nvmf_bdevio_no_huge 00:16:43.741 ************************************ 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:43.741 * Looking for test storage... 00:16:43.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.741 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.742 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.742 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:43.742 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.742 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:43.742 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.742 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.742 19:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.742 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.742 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.742 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.742 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.742 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:16:44.000 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:45.955 19:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.955 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:16:45.955 Found 0000:08:00.0 (0x8086 - 0x159b) 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.956 19:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:16:45.956 Found 0000:08:00.1 (0x8086 - 0x159b) 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:16:45.956 Found net devices under 0000:08:00.0: cvl_0_0 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:16:45.956 Found net devices under 0000:08:00.1: cvl_0_1 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:45.956 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:16:45.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:16:45.956 00:16:45.956 --- 10.0.0.2 ping statistics --- 00:16:45.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.956 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:16:45.956 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:45.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:45.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:16:45.956 00:16:45.956 --- 10.0.0.1 ping statistics --- 00:16:45.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.956 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2574417 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2574417 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2574417 ']' 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:45.957 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:45.957 [2024-07-24 19:13:51.667598] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:16:45.957 [2024-07-24 19:13:51.667704] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:45.957 [2024-07-24 19:13:51.741709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.957 [2024-07-24 19:13:51.863515] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.957 [2024-07-24 19:13:51.863577] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.957 [2024-07-24 19:13:51.863593] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.957 [2024-07-24 19:13:51.863607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.957 [2024-07-24 19:13:51.863619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.957 [2024-07-24 19:13:51.863720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:45.957 [2024-07-24 19:13:51.863789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:45.957 [2024-07-24 19:13:51.863830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:45.957 [2024-07-24 19:13:51.863832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.225 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:46.225 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:16:46.225 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:46.225 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:46.225 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.225 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.225 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.225 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.225 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.225 [2024-07-24 19:13:51.985606] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.225 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.225 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:46.225 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.225 19:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.225 Malloc0 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.225 [2024-07-24 19:13:52.023861] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:46.225 { 00:16:46.225 "params": { 00:16:46.225 "name": "Nvme$subsystem", 00:16:46.225 "trtype": "$TEST_TRANSPORT", 00:16:46.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:46.225 "adrfam": "ipv4", 00:16:46.225 "trsvcid": "$NVMF_PORT", 00:16:46.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:46.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:46.225 "hdgst": ${hdgst:-false}, 00:16:46.225 "ddgst": ${ddgst:-false} 00:16:46.225 }, 00:16:46.225 "method": "bdev_nvme_attach_controller" 00:16:46.225 } 00:16:46.225 EOF 00:16:46.225 )") 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:16:46.225 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:46.225 "params": { 00:16:46.225 "name": "Nvme1", 00:16:46.225 "trtype": "tcp", 00:16:46.225 "traddr": "10.0.0.2", 00:16:46.225 "adrfam": "ipv4", 00:16:46.225 "trsvcid": "4420", 00:16:46.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:46.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:46.225 "hdgst": false, 00:16:46.225 "ddgst": false 00:16:46.225 }, 00:16:46.225 "method": "bdev_nvme_attach_controller" 00:16:46.225 }' 00:16:46.225 [2024-07-24 19:13:52.073313] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:16:46.225 [2024-07-24 19:13:52.073411] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2574453 ] 00:16:46.225 [2024-07-24 19:13:52.139343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:46.483 [2024-07-24 19:13:52.261411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.483 [2024-07-24 19:13:52.261460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.483 [2024-07-24 19:13:52.261463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.741 I/O targets: 00:16:46.741 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:46.741 00:16:46.741 00:16:46.741 CUnit - A unit testing framework for C - Version 2.1-3 00:16:46.741 http://cunit.sourceforge.net/ 00:16:46.741 00:16:46.741 00:16:46.741 Suite: bdevio tests on: Nvme1n1 00:16:46.741 Test: blockdev write read block ...passed 00:16:46.741 Test: blockdev write zeroes read block ...passed 00:16:46.741 Test: blockdev write zeroes read no split ...passed 00:16:46.741 Test: blockdev write zeroes read split ...passed 00:16:46.741 Test: blockdev write zeroes read split partial ...passed 00:16:46.741 Test: blockdev reset ...[2024-07-24 19:13:52.717172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:46.741 [2024-07-24 19:13:52.717310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5570 (9): Bad file descriptor 00:16:46.999 [2024-07-24 19:13:52.818460] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
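The JSON printed above comes from gen_nvmf_target_json and is handed to bdevio on /dev/fd/62; it is a bdev_nvme_attach_controller call in config-file form, pointing the initiator at the subsystem the preceding rpc_cmd calls created. A hedged sketch of the same target setup as explicit rpc.py invocations (rpc.py path shortened here; the flags are the ones traced in this run):

RPC=./spdk/scripts/rpc.py   # shortened stand-in for the full workspace path
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The generated JSON is equivalent to attaching from the initiator side with:
#   bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
#     -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1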
00:16:46.999 passed 00:16:46.999 Test: blockdev write read 8 blocks ...passed 00:16:46.999 Test: blockdev write read size > 128k ...passed 00:16:46.999 Test: blockdev write read invalid size ...passed 00:16:46.999 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:46.999 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:46.999 Test: blockdev write read max offset ...passed 00:16:46.999 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:46.999 Test: blockdev writev readv 8 blocks ...passed 00:16:46.999 Test: blockdev writev readv 30 x 1block ...passed 00:16:47.257 Test: blockdev writev readv block ...passed 00:16:47.257 Test: blockdev writev readv size > 128k ...passed 00:16:47.257 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:47.257 Test: blockdev comparev and writev ...[2024-07-24 19:13:53.033286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.257 [2024-07-24 19:13:53.033328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.257 [2024-07-24 19:13:53.033355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.257 [2024-07-24 19:13:53.033373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:47.257 [2024-07-24 19:13:53.033729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.257 [2024-07-24 19:13:53.033756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:47.257 [2024-07-24 19:13:53.033780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.257 [2024-07-24 19:13:53.033798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:47.257 [2024-07-24 19:13:53.034153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.257 [2024-07-24 19:13:53.034187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:47.257 [2024-07-24 19:13:53.034212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.257 [2024-07-24 19:13:53.034229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:47.257 [2024-07-24 19:13:53.034583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.257 [2024-07-24 19:13:53.034609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:47.257 [2024-07-24 19:13:53.034633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:47.257 [2024-07-24 19:13:53.034650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:47.257 passed 00:16:47.257 Test: blockdev nvme passthru rw ...passed 00:16:47.257 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:13:53.118819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.258 [2024-07-24 19:13:53.118850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:47.258 [2024-07-24 19:13:53.119019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.258 [2024-07-24 19:13:53.119043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:47.258 [2024-07-24 19:13:53.119210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.258 [2024-07-24 19:13:53.119233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:47.258 [2024-07-24 19:13:53.119404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.258 [2024-07-24 19:13:53.119427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:47.258 passed 00:16:47.258 Test: blockdev nvme admin passthru ...passed 00:16:47.258 Test: blockdev copy ...passed 00:16:47.258 00:16:47.258 Run Summary: Type Total Ran Passed Failed Inactive 00:16:47.258 suites 1 1 n/a 0 0 00:16:47.258 tests 23 23 23 0 0 00:16:47.258 asserts 152 152 152 0 n/a 00:16:47.258 00:16:47.258 Elapsed time = 1.251 seconds 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.824 rmmod nvme_tcp 00:16:47.824 rmmod nvme_fabrics 00:16:47.824 rmmod nvme_keyring 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2574417 ']' 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2574417 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2574417 ']' 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2574417 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2574417 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2574417' 00:16:47.824 killing process with pid 2574417 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2574417 00:16:47.824 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2574417 00:16:48.084 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:48.084 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:48.084 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:48.084 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.084 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.084 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.084 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.084 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:50.632 00:16:50.632 real 0m6.424s 00:16:50.632 user 0m11.533s 00:16:50.632 sys 0m2.303s 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:50.632 ************************************ 00:16:50.632 END TEST nvmf_bdevio_no_huge 00:16:50.632 ************************************ 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:50.632 ************************************ 00:16:50.632 START TEST nvmf_tls 00:16:50.632 ************************************ 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:50.632 * Looking for test storage... 00:16:50.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
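Buried in the common.sh sourcing above is the host identity: nvme gen-hostnqn mints a random UUID-based NQN (here nqn.2014-08.org.nvmexpress:uuid:a27f578f-...), and the host ID reuses that UUID. A hedged standalone equivalent; the derivation shown is one plausible reading of the values above, not a copy of common.sh:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # random per invocation
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: host ID = UUID suffix of the NQN
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"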
00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:16:50.632 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:16:52.014 Found 0000:08:00.0 (0x8086 - 0x159b) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:16:52.014 Found 0000:08:00.1 (0x8086 - 0x159b) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:16:52.014 Found net devices under 0000:08:00.0: cvl_0_0 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:16:52.014 Found net devices under 0000:08:00.1: cvl_0_1 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.014 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.015 19:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:52.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:16:52.015 00:16:52.015 --- 10.0.0.2 ping statistics --- 00:16:52.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.015 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:52.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:16:52.015 00:16:52.015 --- 10.0.0.1 ping statistics --- 00:16:52.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.015 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2576141 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2576141 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2576141 ']' 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.015 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.274 [2024-07-24 19:13:58.027606] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:16:52.274 [2024-07-24 19:13:58.027698] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.274 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.274 [2024-07-24 19:13:58.095052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.274 [2024-07-24 19:13:58.211250] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.274 [2024-07-24 19:13:58.211311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.274 [2024-07-24 19:13:58.211336] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.274 [2024-07-24 19:13:58.211350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.274 [2024-07-24 19:13:58.211362] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.274 [2024-07-24 19:13:58.211397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.274 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.274 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:52.274 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:52.274 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:52.274 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.533 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.533 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:52.533 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:52.790 true 00:16:52.790 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:16:52.790 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:53.047 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:16:53.047 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:53.047 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:53.305 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:53.305 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:16:53.562 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:16:53.562 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:53.562 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:16:53.820 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:16:53.820 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:54.078 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:16:54.078 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:54.078 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:54.078 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:54.336 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:16:54.336 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:54.336 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:54.594 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:54.594 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:54.851 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:16:54.851 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:54.851 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:55.109 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:55.109 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.uzMo9VEw6m 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.5Dwf8nqjR2 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.uzMo9VEw6m 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.5Dwf8nqjR2 00:16:55.367 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:55.625 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:55.882 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.uzMo9VEw6m 00:16:55.882 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.uzMo9VEw6m 00:16:55.882 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:56.139 [2024-07-24 19:14:02.114684] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.139 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:56.396 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:56.654 [2024-07-24 19:14:02.603975] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:56.654 [2024-07-24 19:14:02.604188] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.654 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:56.912 malloc0 00:16:56.912 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:57.169 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uzMo9VEw6m 00:16:57.427 [2024-07-24 19:14:03.343135] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:57.427 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.uzMo9VEw6m 00:16:57.427 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.623 Initializing NVMe Controllers 00:17:09.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:09.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:09.623 Initialization complete. Launching workers. 00:17:09.623 ======================================================== 00:17:09.623 Latency(us) 00:17:09.623 Device Information : IOPS MiB/s Average min max 00:17:09.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7507.91 29.33 8527.16 1163.81 10486.02 00:17:09.623 ======================================================== 00:17:09.623 Total : 7507.91 29.33 8527.16 1163.81 10486.02 00:17:09.623 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uzMo9VEw6m 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uzMo9VEw6m' 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2577589 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2577589 /var/tmp/bdevperf.sock 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2577589 ']' 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.623 [2024-07-24 19:14:13.536947] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:17:09.623 [2024-07-24 19:14:13.537040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2577589 ] 00:17:09.623 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.623 [2024-07-24 19:14:13.598529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.623 [2024-07-24 19:14:13.722123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:09.623 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uzMo9VEw6m 00:17:09.623 [2024-07-24 19:14:14.098794] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:09.623 [2024-07-24 19:14:14.098910] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:09.623 TLSTESTn1 00:17:09.623 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:09.623 Running I/O for 10 seconds... 
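The run just launched above is the harness's standard bdevperf flow: bdevperf starts idle with -z so it only serves JSON-RPCs on a private socket, the TLS controller is attached through rpc.py, and bdevperf.py then triggers the preconfigured workload (its results follow below). A minimal sketch of that control flow, with the long workspace prefix dropped and the key path a placeholder:

    # Start bdevperf idle (-z); it waits on its own RPC socket instead of running.
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &

    # Once the socket is up, attach an NVMe/TCP controller with a TLS PSK.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/psk.key

    # Kick off the configured verify job and collect the results.
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

/tmp/psk.key stands in for the mktemp path the suite actually uses; the commands and flags are otherwise taken verbatim from this log.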
00:17:19.584 00:17:19.584 Latency(us) 00:17:19.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.584 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:19.584 Verification LBA range: start 0x0 length 0x2000 00:17:19.584 TLSTESTn1 : 10.02 3498.51 13.67 0.00 0.00 36518.47 8592.50 39612.87 00:17:19.584 =================================================================================================================== 00:17:19.584 Total : 3498.51 13.67 0.00 0.00 36518.47 8592.50 39612.87 00:17:19.584 0 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2577589 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2577589 ']' 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2577589 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2577589 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2577589' 00:17:19.584 killing process with pid 2577589 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2577589 00:17:19.584 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.584 00:17:19.584 Latency(us) 00:17:19.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.584 =================================================================================================================== 00:17:19.584 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:19.584 [2024-07-24 19:14:24.384759] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2577589 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5Dwf8nqjR2 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5Dwf8nqjR2 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5Dwf8nqjR2 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5Dwf8nqjR2' 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2578589 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2578589 /var/tmp/bdevperf.sock 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2578589 ']' 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:19.584 [2024-07-24 19:14:24.653518] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:17:19.584 [2024-07-24 19:14:24.653617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2578589 ] 00:17:19.584 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.584 [2024-07-24 19:14:24.715130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.584 [2024-07-24 19:14:24.824781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:19.584 19:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5Dwf8nqjR2 00:17:19.584 [2024-07-24 19:14:25.191137] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.584 [2024-07-24 19:14:25.191270] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:19.584 [2024-07-24 19:14:25.200844] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:19.584 [2024-07-24 19:14:25.200997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e5470 (107): Transport endpoint is not connected 00:17:19.585 [2024-07-24 19:14:25.201996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e5470 (9): Bad file descriptor 00:17:19.585 [2024-07-24 19:14:25.203006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:19.585 [2024-07-24 19:14:25.203024] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:19.585 [2024-07-24 19:14:25.203041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
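This attach is expected to fail: /tmp/tmp.5Dwf8nqjR2 is a freshly generated key, not the one the target's host entry was configured with, so the handshake never completes and the controller lands in the failed state above (the harness echoes the failed JSON-RPC request and its -5 response next). The valid_exec_arg/type -t/es lines in the trace come from the suite's NOT wrapper, which inverts a command's exit status so that an expected failure counts as a pass. A simplified sketch of that pattern with assumed internals; the real helper in autotest_common.sh also screens exit codes above 128, as the '(( es > 128 ))' trace shows:

    NOT() {
        local es=0
        "$@" || es=$?
        # Codes above 128 mean a signal/crash, never an "expected" failure.
        (( es > 128 )) && return "$es"
        # Otherwise succeed exactly when the wrapped command failed.
        (( es != 0 ))
    }

    NOT run_bdevperf "$subnqn" "$hostnqn" /tmp/wrong.key   # passes because the attach fails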
00:17:19.585 request: 00:17:19.585 { 00:17:19.585 "name": "TLSTEST", 00:17:19.585 "trtype": "tcp", 00:17:19.585 "traddr": "10.0.0.2", 00:17:19.585 "adrfam": "ipv4", 00:17:19.585 "trsvcid": "4420", 00:17:19.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.585 "prchk_reftag": false, 00:17:19.585 "prchk_guard": false, 00:17:19.585 "hdgst": false, 00:17:19.585 "ddgst": false, 00:17:19.585 "psk": "/tmp/tmp.5Dwf8nqjR2", 00:17:19.585 "method": "bdev_nvme_attach_controller", 00:17:19.585 "req_id": 1 00:17:19.585 } 00:17:19.585 Got JSON-RPC error response 00:17:19.585 response: 00:17:19.585 { 00:17:19.585 "code": -5, 00:17:19.585 "message": "Input/output error" 00:17:19.585 } 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2578589 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2578589 ']' 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2578589 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2578589 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2578589' 00:17:19.585 killing process with pid 2578589 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2578589 00:17:19.585 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.585 00:17:19.585 Latency(us) 00:17:19.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.585 =================================================================================================================== 00:17:19.585 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:19.585 [2024-07-24 19:14:25.250091] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2578589 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uzMo9VEw6m 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uzMo9VEw6m 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uzMo9VEw6m 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uzMo9VEw6m' 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2578685 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2578685 /var/tmp/bdevperf.sock 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2578685 ']' 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:19.585 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:19.585 [2024-07-24 19:14:25.484355] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:17:19.585 [2024-07-24 19:14:25.484451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2578685 ] 00:17:19.585 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.585 [2024-07-24 19:14:25.541567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.843 [2024-07-24 19:14:25.648168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.843 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:19.843 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:19.843 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.uzMo9VEw6m 00:17:20.110 [2024-07-24 19:14:25.967111] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:20.110 [2024-07-24 19:14:25.967219] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:20.110 [2024-07-24 19:14:25.972671] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:20.110 [2024-07-24 19:14:25.972699] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:20.110 [2024-07-24 19:14:25.972744] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:20.110 [2024-07-24 19:14:25.972901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2b470 (107): Transport endpoint is not connected 00:17:20.110 [2024-07-24 19:14:25.973878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2b470 (9): Bad file descriptor 00:17:20.110 [2024-07-24 19:14:25.974877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:20.110 [2024-07-24 19:14:25.974895] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:20.110 [2024-07-24 19:14:25.974924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
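The second negative case moves the error to the target side: the key file is the right one, but nqn.2016-06.io.spdk:host2 was never added to the subsystem, so the 'Could not find PSK for identity' errors above fire during the handshake. They also reveal the identity string the target derives for the PSK lookup, which per this log has the shape 'NVMe0R01 <hostnqn> <subnqn>'. A hypothetical one-liner for reconstructing it when debugging such lookups:

    # Identity format as printed by tcp.c/posix.c in the errors above.
    printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1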
00:17:20.110 request: 00:17:20.110 { 00:17:20.110 "name": "TLSTEST", 00:17:20.110 "trtype": "tcp", 00:17:20.110 "traddr": "10.0.0.2", 00:17:20.110 "adrfam": "ipv4", 00:17:20.110 "trsvcid": "4420", 00:17:20.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.110 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:20.110 "prchk_reftag": false, 00:17:20.110 "prchk_guard": false, 00:17:20.110 "hdgst": false, 00:17:20.110 "ddgst": false, 00:17:20.110 "psk": "/tmp/tmp.uzMo9VEw6m", 00:17:20.110 "method": "bdev_nvme_attach_controller", 00:17:20.110 "req_id": 1 00:17:20.110 } 00:17:20.110 Got JSON-RPC error response 00:17:20.110 response: 00:17:20.110 { 00:17:20.110 "code": -5, 00:17:20.110 "message": "Input/output error" 00:17:20.110 } 00:17:20.110 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2578685 00:17:20.110 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2578685 ']' 00:17:20.110 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2578685 00:17:20.110 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:20.110 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:20.110 19:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2578685 00:17:20.110 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:20.110 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:20.110 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2578685' 00:17:20.110 killing process with pid 2578685 00:17:20.110 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2578685 00:17:20.110 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.110 00:17:20.111 Latency(us) 00:17:20.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.111 =================================================================================================================== 00:17:20.111 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:20.111 [2024-07-24 19:14:26.010628] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:20.111 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2578685 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uzMo9VEw6m 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uzMo9VEw6m 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uzMo9VEw6m 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uzMo9VEw6m' 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2578710 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2578710 /var/tmp/bdevperf.sock 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2578710 ']' 00:17:20.371 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.372 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:20.372 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.372 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:20.372 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:20.372 [2024-07-24 19:14:26.242304] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:17:20.372 [2024-07-24 19:14:26.242395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2578710 ] 00:17:20.372 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.372 [2024-07-24 19:14:26.302023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.629 [2024-07-24 19:14:26.406283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.629 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:20.629 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:20.629 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uzMo9VEw6m 00:17:20.887 [2024-07-24 19:14:26.776841] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:20.887 [2024-07-24 19:14:26.776959] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:20.887 [2024-07-24 19:14:26.784795] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:20.887 [2024-07-24 19:14:26.784835] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:20.887 [2024-07-24 19:14:26.784871] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:20.887 [2024-07-24 19:14:26.785667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2534470 (107): Transport endpoint is not connected 00:17:20.887 [2024-07-24 19:14:26.786665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2534470 (9): Bad file descriptor 00:17:20.887 [2024-07-24 19:14:26.787678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:20.887 [2024-07-24 19:14:26.787695] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:20.887 [2024-07-24 19:14:26.787724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:20.887 request: 00:17:20.887 { 00:17:20.887 "name": "TLSTEST", 00:17:20.887 "trtype": "tcp", 00:17:20.887 "traddr": "10.0.0.2", 00:17:20.887 "adrfam": "ipv4", 00:17:20.887 "trsvcid": "4420", 00:17:20.887 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:20.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.887 "prchk_reftag": false, 00:17:20.887 "prchk_guard": false, 00:17:20.887 "hdgst": false, 00:17:20.887 "ddgst": false, 00:17:20.887 "psk": "/tmp/tmp.uzMo9VEw6m", 00:17:20.887 "method": "bdev_nvme_attach_controller", 00:17:20.887 "req_id": 1 00:17:20.887 } 00:17:20.887 Got JSON-RPC error response 00:17:20.887 response: 00:17:20.887 { 00:17:20.887 "code": -5, 00:17:20.887 "message": "Input/output error" 00:17:20.887 } 00:17:20.887 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2578710 00:17:20.887 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2578710 ']' 00:17:20.887 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2578710 00:17:20.887 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:20.887 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:20.887 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2578710 00:17:20.887 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:20.887 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:20.887 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2578710' 00:17:20.887 killing process with pid 2578710 00:17:20.887 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2578710 00:17:20.887 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.887 00:17:20.887 Latency(us) 00:17:20.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.887 =================================================================================================================== 00:17:20.887 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:20.887 [2024-07-24 19:14:26.833772] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:20.887 19:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2578710 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2578814 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2578814 /var/tmp/bdevperf.sock 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2578814 ']' 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:21.145 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.145 [2024-07-24 19:14:27.068123] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:17:21.145 [2024-07-24 19:14:27.068216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2578814 ] 00:17:21.145 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.145 [2024-07-24 19:14:27.127037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.402 [2024-07-24 19:14:27.227557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.402 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:21.402 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:21.402 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:21.660 [2024-07-24 19:14:27.596946] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:21.660 [2024-07-24 19:14:27.598743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x632a20 (9): Bad file descriptor 00:17:21.660 [2024-07-24 19:14:27.599744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:21.660 [2024-07-24 19:14:27.599764] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:21.660 [2024-07-24 19:14:27.599781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
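The fourth case drops --psk entirely (psk= in the trace above), so the initiator opens a plain NVMe/TCP connection to a listener that was set up for TLS; the target drops the socket, which surfaces as the same errno 107 path and another -5 below. Both halves of the mismatch, taken from commands appearing in this log (-k on nvmf_subsystem_add_listener is what enables TLS in these tests, per the 'TLS support is considered experimental' notice when the listener is created):

    # Target side: the listener is created TLS-enabled.
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k

    # Initiator side: attaching without --psk cannot complete the handshake.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1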
00:17:21.660 request: 00:17:21.660 { 00:17:21.660 "name": "TLSTEST", 00:17:21.660 "trtype": "tcp", 00:17:21.660 "traddr": "10.0.0.2", 00:17:21.660 "adrfam": "ipv4", 00:17:21.660 "trsvcid": "4420", 00:17:21.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.660 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.660 "prchk_reftag": false, 00:17:21.660 "prchk_guard": false, 00:17:21.660 "hdgst": false, 00:17:21.660 "ddgst": false, 00:17:21.660 "method": "bdev_nvme_attach_controller", 00:17:21.660 "req_id": 1 00:17:21.660 } 00:17:21.660 Got JSON-RPC error response 00:17:21.660 response: 00:17:21.660 { 00:17:21.660 "code": -5, 00:17:21.660 "message": "Input/output error" 00:17:21.660 } 00:17:21.660 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2578814 00:17:21.660 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2578814 ']' 00:17:21.660 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2578814 00:17:21.660 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:21.660 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:21.660 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2578814 00:17:21.660 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:21.660 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:21.660 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2578814' 00:17:21.660 killing process with pid 2578814 00:17:21.660 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2578814 00:17:21.660 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.660 00:17:21.660 Latency(us) 00:17:21.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.660 =================================================================================================================== 00:17:21.660 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.660 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2578814 00:17:21.917 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:21.917 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:21.917 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:21.917 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:21.917 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:21.917 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2576141 00:17:21.917 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2576141 ']' 00:17:21.917 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2576141 00:17:21.917 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:21.917 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:21.917 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2576141 00:17:21.917 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:21.917 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:21.917 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2576141' 00:17:21.917 killing process with pid 2576141 00:17:21.918 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2576141 00:17:21.918 [2024-07-24 19:14:27.858867] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:21.918 19:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2576141 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.AfjINMTOHw 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.AfjINMTOHw 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2578930 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2578930 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2578930 ']' 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.175 19:14:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.175 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.175 [2024-07-24 19:14:28.146517] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:17:22.175 [2024-07-24 19:14:28.146606] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.175 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.432 [2024-07-24 19:14:28.200105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.432 [2024-07-24 19:14:28.293682] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.432 [2024-07-24 19:14:28.293742] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.432 [2024-07-24 19:14:28.293754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.432 [2024-07-24 19:14:28.293764] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.432 [2024-07-24 19:14:28.293772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
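A few lines up, format_interchange_psk wraps the raw 48-byte key in the NVMe/TCP TLS PSK interchange format, NVMeTLSkey-1:<hash>:<base64 blob>:, where the blob is the configured key bytes followed by their CRC32 and the 02 hash field selects the SHA-384 flavor used here. A sketch of what the helper's inline python appears to compute; the little-endian CRC byte order is inferred from the output string in the trace, not from the helper's source:

    # Wrap a raw key in the interchange format: base64(key || CRC32-LE), 02 = SHA-384.
    key=00112233445566778899aabbccddeeff0011223344556677
    python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:02:"+base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode()+":")' "$key"

This should reproduce the NVMeTLSkey-1:02:MDAx...wWXNJw==: value shown above; the suite then writes it to a mktemp file and chmods it 0600 before passing it to --psk, which matters for the final test below.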
00:17:22.432 [2024-07-24 19:14:28.293796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.432 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.432 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:22.432 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:22.432 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:22.432 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.432 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.432 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.AfjINMTOHw 00:17:22.432 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AfjINMTOHw 00:17:22.432 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:22.689 [2024-07-24 19:14:28.698361] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.947 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:23.204 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:23.488 [2024-07-24 19:14:29.263807] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:23.488 [2024-07-24 19:14:29.264026] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.488 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:23.766 malloc0 00:17:23.766 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:24.031 19:14:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AfjINMTOHw 00:17:24.031 [2024-07-24 19:14:30.003081] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AfjINMTOHw 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AfjINMTOHw' 00:17:24.031 19:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2579156 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2579156 /var/tmp/bdevperf.sock 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2579156 ']' 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:24.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.031 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.288 [2024-07-24 19:14:30.065962] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:17:24.288 [2024-07-24 19:14:30.066060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2579156 ] 00:17:24.288 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.288 [2024-07-24 19:14:30.122130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.288 [2024-07-24 19:14:30.220406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.546 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:24.546 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:24.546 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AfjINMTOHw 00:17:24.804 [2024-07-24 19:14:30.582985] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:24.804 [2024-07-24 19:14:30.583103] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:24.804 TLSTESTn1 00:17:24.804 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:24.804 Running I/O for 10 seconds... 
00:17:36.992 00:17:36.992 Latency(us) 00:17:36.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.992 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:36.992 Verification LBA range: start 0x0 length 0x2000 00:17:36.992 TLSTESTn1 : 10.03 3358.86 13.12 0.00 0.00 38026.17 5971.06 65633.09 00:17:36.992 =================================================================================================================== 00:17:36.992 Total : 3358.86 13.12 0.00 0.00 38026.17 5971.06 65633.09 00:17:36.992 0 00:17:36.992 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:36.992 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2579156 00:17:36.992 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2579156 ']' 00:17:36.992 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2579156 00:17:36.992 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:36.992 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:36.992 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2579156 00:17:36.992 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:36.992 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:36.992 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2579156' 00:17:36.992 killing process with pid 2579156 00:17:36.992 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2579156 00:17:36.992 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.992 00:17:36.992 Latency(us) 00:17:36.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.992 =================================================================================================================== 00:17:36.992 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.992 [2024-07-24 19:14:40.880628] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:36.992 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2579156 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.AfjINMTOHw 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AfjINMTOHw 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AfjINMTOHw 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:36.992 
19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AfjINMTOHw 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AfjINMTOHw' 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2580154 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2580154 /var/tmp/bdevperf.sock 00:17:36.992 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2580154 ']' 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.993 [2024-07-24 19:14:41.151602] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:17:36.993 [2024-07-24 19:14:41.151698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2580154 ] 00:17:36.993 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.993 [2024-07-24 19:14:41.212965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.993 [2024-07-24 19:14:41.332895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AfjINMTOHw 00:17:36.993 [2024-07-24 19:14:41.709411] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:36.993 [2024-07-24 19:14:41.709498] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:36.993 [2024-07-24 19:14:41.709514] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.AfjINMTOHw 00:17:36.993 request: 00:17:36.993 { 00:17:36.993 "name": "TLSTEST", 00:17:36.993 "trtype": "tcp", 00:17:36.993 "traddr": "10.0.0.2", 00:17:36.993 "adrfam": "ipv4", 00:17:36.993 "trsvcid": "4420", 00:17:36.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.993 "prchk_reftag": false, 00:17:36.993 "prchk_guard": false, 00:17:36.993 "hdgst": false, 00:17:36.993 "ddgst": false, 00:17:36.993 "psk": "/tmp/tmp.AfjINMTOHw", 00:17:36.993 "method": "bdev_nvme_attach_controller", 00:17:36.993 "req_id": 1 00:17:36.993 } 00:17:36.993 Got JSON-RPC error response 00:17:36.993 response: 00:17:36.993 { 00:17:36.993 "code": -1, 00:17:36.993 "message": "Operation not permitted" 00:17:36.993 } 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2580154 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2580154 ']' 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2580154 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2580154 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2580154' 00:17:36.993 killing process with pid 2580154 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2580154 00:17:36.993 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.993 
00:17:36.993 Latency(us) 00:17:36.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.993 =================================================================================================================== 00:17:36.993 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2580154 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2578930 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2578930 ']' 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2578930 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2578930 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2578930' 00:17:36.993 killing process with pid 2578930 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2578930 00:17:36.993 [2024-07-24 19:14:41.957591] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:36.993 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2578930 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2580266 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2580266 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2580266 ']' 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.993 19:14:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.993 [2024-07-24 19:14:42.192467] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:17:36.993 [2024-07-24 19:14:42.192565] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.993 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.993 [2024-07-24 19:14:42.248570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.993 [2024-07-24 19:14:42.351876] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.993 [2024-07-24 19:14:42.351937] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.993 [2024-07-24 19:14:42.351963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.993 [2024-07-24 19:14:42.351975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.993 [2024-07-24 19:14:42.351985] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
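At this point the target has been restarted and tls.sh is about to repeat the negative check on the target side (test 177): the key file is still at mode 0666 from the chmod in test 170, and SPDK refuses any PSK file that group or other can read, which is the 'Incorrect permissions for PSK file' error reported above by bdev_nvme and below by the tcp transport. A minimal shell sketch of an equivalent gate, assuming the check is simply that no group/other permission bits may be set:

    mode=$(stat -c '%a' /tmp/tmp.AfjINMTOHw)   # 666 after the chmod in test 170
    if [ $(( 0$mode & 077 )) -ne 0 ]; then     # any group/other bits set?
        echo 'Incorrect permissions for PSK file' >&2
        exit 1                                 # surfaces as the JSON-RPC errors in this log
    fi

The chmod 0600 in test 181 below is what lets the same calls succeed afterwards.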
00:17:36.993 [2024-07-24 19:14:42.352013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.AfjINMTOHw 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.AfjINMTOHw 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:36.993 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.994 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:36.994 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.994 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.AfjINMTOHw 00:17:36.994 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AfjINMTOHw 00:17:36.994 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:36.994 [2024-07-24 19:14:42.751536] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.994 19:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:37.251 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:37.507 [2024-07-24 19:14:43.345108] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:37.507 [2024-07-24 19:14:43.345354] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.507 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:37.764 malloc0 00:17:37.765 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:38.022 19:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AfjINMTOHw 00:17:38.281 [2024-07-24 19:14:44.189994] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:38.281 [2024-07-24 19:14:44.190033] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:38.281 [2024-07-24 19:14:44.190073] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:38.281 request: 00:17:38.281 { 00:17:38.281 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.281 "host": "nqn.2016-06.io.spdk:host1", 00:17:38.281 "psk": "/tmp/tmp.AfjINMTOHw", 00:17:38.281 "method": "nvmf_subsystem_add_host", 00:17:38.281 "req_id": 1 00:17:38.281 } 00:17:38.281 Got JSON-RPC error response 00:17:38.281 response: 00:17:38.281 { 00:17:38.281 "code": -32603, 00:17:38.281 "message": "Internal error" 00:17:38.281 } 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2580266 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2580266 ']' 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2580266 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2580266 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2580266' 00:17:38.281 killing process with pid 2580266 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2580266 00:17:38.281 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2580266 00:17:38.540 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.AfjINMTOHw 00:17:38.540 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:38.540 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:38.540 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:38.540 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.540 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2580493 00:17:38.540 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:38.540 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 2580493 00:17:38.540 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2580493 ']' 00:17:38.540 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.540 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:38.540 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.540 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:38.540 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.540 [2024-07-24 19:14:44.524575] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:17:38.540 [2024-07-24 19:14:44.524671] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.798 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.798 [2024-07-24 19:14:44.589016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.798 [2024-07-24 19:14:44.704736] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.798 [2024-07-24 19:14:44.704794] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.798 [2024-07-24 19:14:44.704810] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.798 [2024-07-24 19:14:44.704823] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.798 [2024-07-24 19:14:44.704835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
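With the key back at mode 0600, test 185 runs setup_nvmf_tgt for the positive path. Stripped of the xtrace noise that follows, the whole target-side TLS setup comes down to six rpc.py calls (rpc.py defaults to the /var/tmp/spdk.sock socket; -k on the listener requests the TLS-secured channel, which the saved config later reflects as "secure_channel": true):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AfjINMTOHw

The deprecation warning from nvmf_tcp_subsystem_add_host on that last call is expected here: the PSK-path interface is scheduled for removal in v24.09.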
00:17:38.798 [2024-07-24 19:14:44.704864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.731 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:39.731 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:39.731 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:39.731 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:39.731 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.731 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.731 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.AfjINMTOHw 00:17:39.731 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AfjINMTOHw 00:17:39.731 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:39.988 [2024-07-24 19:14:45.815668] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.988 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:40.245 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:40.502 [2024-07-24 19:14:46.413222] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.502 [2024-07-24 19:14:46.413438] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.502 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:40.759 malloc0 00:17:40.759 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:41.017 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AfjINMTOHw 00:17:41.274 [2024-07-24 19:14:47.140302] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:41.275 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2580729 00:17:41.275 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:41.275 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2580729 /var/tmp/bdevperf.sock 00:17:41.275 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2580729 ']' 00:17:41.275 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.275 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.275 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.275 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:41.275 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.275 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.275 [2024-07-24 19:14:47.202989] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:17:41.275 [2024-07-24 19:14:47.203071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2580729 ] 00:17:41.275 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.275 [2024-07-24 19:14:47.251379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.532 [2024-07-24 19:14:47.351719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.532 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:41.532 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:41.532 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AfjINMTOHw 00:17:41.791 [2024-07-24 19:14:47.666994] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:41.791 [2024-07-24 19:14:47.667097] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:41.791 TLSTESTn1 00:17:41.791 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:42.357 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:42.357 "subsystems": [ 00:17:42.357 { 00:17:42.357 "subsystem": "keyring", 00:17:42.357 "config": [] 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "subsystem": "iobuf", 00:17:42.357 "config": [ 00:17:42.357 { 00:17:42.357 "method": "iobuf_set_options", 00:17:42.357 "params": { 00:17:42.357 "small_pool_count": 8192, 00:17:42.357 "large_pool_count": 1024, 00:17:42.357 "small_bufsize": 8192, 00:17:42.357 "large_bufsize": 135168 00:17:42.357 } 00:17:42.357 } 00:17:42.357 ] 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "subsystem": "sock", 00:17:42.357 "config": [ 00:17:42.357 { 00:17:42.357 "method": "sock_set_default_impl", 00:17:42.357 "params": { 00:17:42.357 "impl_name": "posix" 00:17:42.357 } 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "method": "sock_impl_set_options", 00:17:42.357 "params": { 00:17:42.357 "impl_name": "ssl", 00:17:42.357 "recv_buf_size": 4096, 00:17:42.357 "send_buf_size": 
4096, 00:17:42.357 "enable_recv_pipe": true, 00:17:42.357 "enable_quickack": false, 00:17:42.357 "enable_placement_id": 0, 00:17:42.357 "enable_zerocopy_send_server": true, 00:17:42.357 "enable_zerocopy_send_client": false, 00:17:42.357 "zerocopy_threshold": 0, 00:17:42.357 "tls_version": 0, 00:17:42.357 "enable_ktls": false 00:17:42.357 } 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "method": "sock_impl_set_options", 00:17:42.357 "params": { 00:17:42.357 "impl_name": "posix", 00:17:42.357 "recv_buf_size": 2097152, 00:17:42.357 "send_buf_size": 2097152, 00:17:42.357 "enable_recv_pipe": true, 00:17:42.357 "enable_quickack": false, 00:17:42.357 "enable_placement_id": 0, 00:17:42.357 "enable_zerocopy_send_server": true, 00:17:42.357 "enable_zerocopy_send_client": false, 00:17:42.357 "zerocopy_threshold": 0, 00:17:42.357 "tls_version": 0, 00:17:42.357 "enable_ktls": false 00:17:42.357 } 00:17:42.357 } 00:17:42.357 ] 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "subsystem": "vmd", 00:17:42.357 "config": [] 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "subsystem": "accel", 00:17:42.357 "config": [ 00:17:42.357 { 00:17:42.357 "method": "accel_set_options", 00:17:42.357 "params": { 00:17:42.357 "small_cache_size": 128, 00:17:42.357 "large_cache_size": 16, 00:17:42.357 "task_count": 2048, 00:17:42.357 "sequence_count": 2048, 00:17:42.357 "buf_count": 2048 00:17:42.357 } 00:17:42.357 } 00:17:42.357 ] 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "subsystem": "bdev", 00:17:42.357 "config": [ 00:17:42.357 { 00:17:42.357 "method": "bdev_set_options", 00:17:42.357 "params": { 00:17:42.357 "bdev_io_pool_size": 65535, 00:17:42.357 "bdev_io_cache_size": 256, 00:17:42.357 "bdev_auto_examine": true, 00:17:42.357 "iobuf_small_cache_size": 128, 00:17:42.357 "iobuf_large_cache_size": 16 00:17:42.357 } 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "method": "bdev_raid_set_options", 00:17:42.357 "params": { 00:17:42.357 "process_window_size_kb": 1024, 00:17:42.357 "process_max_bandwidth_mb_sec": 0 00:17:42.357 } 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "method": "bdev_iscsi_set_options", 00:17:42.357 "params": { 00:17:42.357 "timeout_sec": 30 00:17:42.357 } 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "method": "bdev_nvme_set_options", 00:17:42.357 "params": { 00:17:42.357 "action_on_timeout": "none", 00:17:42.357 "timeout_us": 0, 00:17:42.357 "timeout_admin_us": 0, 00:17:42.357 "keep_alive_timeout_ms": 10000, 00:17:42.357 "arbitration_burst": 0, 00:17:42.357 "low_priority_weight": 0, 00:17:42.357 "medium_priority_weight": 0, 00:17:42.357 "high_priority_weight": 0, 00:17:42.357 "nvme_adminq_poll_period_us": 10000, 00:17:42.357 "nvme_ioq_poll_period_us": 0, 00:17:42.357 "io_queue_requests": 0, 00:17:42.357 "delay_cmd_submit": true, 00:17:42.357 "transport_retry_count": 4, 00:17:42.357 "bdev_retry_count": 3, 00:17:42.357 "transport_ack_timeout": 0, 00:17:42.357 "ctrlr_loss_timeout_sec": 0, 00:17:42.357 "reconnect_delay_sec": 0, 00:17:42.357 "fast_io_fail_timeout_sec": 0, 00:17:42.357 "disable_auto_failback": false, 00:17:42.357 "generate_uuids": false, 00:17:42.357 "transport_tos": 0, 00:17:42.357 "nvme_error_stat": false, 00:17:42.357 "rdma_srq_size": 0, 00:17:42.357 "io_path_stat": false, 00:17:42.357 "allow_accel_sequence": false, 00:17:42.357 "rdma_max_cq_size": 0, 00:17:42.357 "rdma_cm_event_timeout_ms": 0, 00:17:42.357 "dhchap_digests": [ 00:17:42.357 "sha256", 00:17:42.357 "sha384", 00:17:42.357 "sha512" 00:17:42.357 ], 00:17:42.357 "dhchap_dhgroups": [ 00:17:42.357 "null", 00:17:42.357 "ffdhe2048", 00:17:42.357 
"ffdhe3072", 00:17:42.357 "ffdhe4096", 00:17:42.357 "ffdhe6144", 00:17:42.357 "ffdhe8192" 00:17:42.357 ] 00:17:42.357 } 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "method": "bdev_nvme_set_hotplug", 00:17:42.357 "params": { 00:17:42.357 "period_us": 100000, 00:17:42.357 "enable": false 00:17:42.357 } 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "method": "bdev_malloc_create", 00:17:42.357 "params": { 00:17:42.357 "name": "malloc0", 00:17:42.357 "num_blocks": 8192, 00:17:42.357 "block_size": 4096, 00:17:42.357 "physical_block_size": 4096, 00:17:42.357 "uuid": "dd961967-d101-497d-b7fd-ac35b34d5d35", 00:17:42.357 "optimal_io_boundary": 0, 00:17:42.357 "md_size": 0, 00:17:42.357 "dif_type": 0, 00:17:42.357 "dif_is_head_of_md": false, 00:17:42.357 "dif_pi_format": 0 00:17:42.357 } 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "method": "bdev_wait_for_examine" 00:17:42.357 } 00:17:42.357 ] 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "subsystem": "nbd", 00:17:42.357 "config": [] 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "subsystem": "scheduler", 00:17:42.357 "config": [ 00:17:42.357 { 00:17:42.357 "method": "framework_set_scheduler", 00:17:42.357 "params": { 00:17:42.357 "name": "static" 00:17:42.357 } 00:17:42.357 } 00:17:42.357 ] 00:17:42.357 }, 00:17:42.357 { 00:17:42.357 "subsystem": "nvmf", 00:17:42.357 "config": [ 00:17:42.357 { 00:17:42.357 "method": "nvmf_set_config", 00:17:42.358 "params": { 00:17:42.358 "discovery_filter": "match_any", 00:17:42.358 "admin_cmd_passthru": { 00:17:42.358 "identify_ctrlr": false 00:17:42.358 } 00:17:42.358 } 00:17:42.358 }, 00:17:42.358 { 00:17:42.358 "method": "nvmf_set_max_subsystems", 00:17:42.358 "params": { 00:17:42.358 "max_subsystems": 1024 00:17:42.358 } 00:17:42.358 }, 00:17:42.358 { 00:17:42.358 "method": "nvmf_set_crdt", 00:17:42.358 "params": { 00:17:42.358 "crdt1": 0, 00:17:42.358 "crdt2": 0, 00:17:42.358 "crdt3": 0 00:17:42.358 } 00:17:42.358 }, 00:17:42.358 { 00:17:42.358 "method": "nvmf_create_transport", 00:17:42.358 "params": { 00:17:42.358 "trtype": "TCP", 00:17:42.358 "max_queue_depth": 128, 00:17:42.358 "max_io_qpairs_per_ctrlr": 127, 00:17:42.358 "in_capsule_data_size": 4096, 00:17:42.358 "max_io_size": 131072, 00:17:42.358 "io_unit_size": 131072, 00:17:42.358 "max_aq_depth": 128, 00:17:42.358 "num_shared_buffers": 511, 00:17:42.358 "buf_cache_size": 4294967295, 00:17:42.358 "dif_insert_or_strip": false, 00:17:42.358 "zcopy": false, 00:17:42.358 "c2h_success": false, 00:17:42.358 "sock_priority": 0, 00:17:42.358 "abort_timeout_sec": 1, 00:17:42.358 "ack_timeout": 0, 00:17:42.358 "data_wr_pool_size": 0 00:17:42.358 } 00:17:42.358 }, 00:17:42.358 { 00:17:42.358 "method": "nvmf_create_subsystem", 00:17:42.358 "params": { 00:17:42.358 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.358 "allow_any_host": false, 00:17:42.358 "serial_number": "SPDK00000000000001", 00:17:42.358 "model_number": "SPDK bdev Controller", 00:17:42.358 "max_namespaces": 10, 00:17:42.358 "min_cntlid": 1, 00:17:42.358 "max_cntlid": 65519, 00:17:42.358 "ana_reporting": false 00:17:42.358 } 00:17:42.358 }, 00:17:42.358 { 00:17:42.358 "method": "nvmf_subsystem_add_host", 00:17:42.358 "params": { 00:17:42.358 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.358 "host": "nqn.2016-06.io.spdk:host1", 00:17:42.358 "psk": "/tmp/tmp.AfjINMTOHw" 00:17:42.358 } 00:17:42.358 }, 00:17:42.358 { 00:17:42.358 "method": "nvmf_subsystem_add_ns", 00:17:42.358 "params": { 00:17:42.358 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.358 "namespace": { 00:17:42.358 "nsid": 1, 00:17:42.358 
"bdev_name": "malloc0", 00:17:42.358 "nguid": "DD961967D101497DB7FDAC35B34D5D35", 00:17:42.358 "uuid": "dd961967-d101-497d-b7fd-ac35b34d5d35", 00:17:42.358 "no_auto_visible": false 00:17:42.358 } 00:17:42.358 } 00:17:42.358 }, 00:17:42.358 { 00:17:42.358 "method": "nvmf_subsystem_add_listener", 00:17:42.358 "params": { 00:17:42.358 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.358 "listen_address": { 00:17:42.358 "trtype": "TCP", 00:17:42.358 "adrfam": "IPv4", 00:17:42.358 "traddr": "10.0.0.2", 00:17:42.358 "trsvcid": "4420" 00:17:42.358 }, 00:17:42.358 "secure_channel": true 00:17:42.358 } 00:17:42.358 } 00:17:42.358 ] 00:17:42.358 } 00:17:42.358 ] 00:17:42.358 }' 00:17:42.358 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:42.617 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:42.617 "subsystems": [ 00:17:42.617 { 00:17:42.617 "subsystem": "keyring", 00:17:42.617 "config": [] 00:17:42.617 }, 00:17:42.617 { 00:17:42.617 "subsystem": "iobuf", 00:17:42.617 "config": [ 00:17:42.617 { 00:17:42.617 "method": "iobuf_set_options", 00:17:42.617 "params": { 00:17:42.617 "small_pool_count": 8192, 00:17:42.617 "large_pool_count": 1024, 00:17:42.617 "small_bufsize": 8192, 00:17:42.617 "large_bufsize": 135168 00:17:42.617 } 00:17:42.617 } 00:17:42.617 ] 00:17:42.617 }, 00:17:42.617 { 00:17:42.617 "subsystem": "sock", 00:17:42.617 "config": [ 00:17:42.617 { 00:17:42.617 "method": "sock_set_default_impl", 00:17:42.617 "params": { 00:17:42.617 "impl_name": "posix" 00:17:42.617 } 00:17:42.617 }, 00:17:42.617 { 00:17:42.617 "method": "sock_impl_set_options", 00:17:42.617 "params": { 00:17:42.617 "impl_name": "ssl", 00:17:42.617 "recv_buf_size": 4096, 00:17:42.617 "send_buf_size": 4096, 00:17:42.617 "enable_recv_pipe": true, 00:17:42.617 "enable_quickack": false, 00:17:42.617 "enable_placement_id": 0, 00:17:42.617 "enable_zerocopy_send_server": true, 00:17:42.617 "enable_zerocopy_send_client": false, 00:17:42.617 "zerocopy_threshold": 0, 00:17:42.617 "tls_version": 0, 00:17:42.617 "enable_ktls": false 00:17:42.617 } 00:17:42.617 }, 00:17:42.617 { 00:17:42.617 "method": "sock_impl_set_options", 00:17:42.617 "params": { 00:17:42.617 "impl_name": "posix", 00:17:42.617 "recv_buf_size": 2097152, 00:17:42.617 "send_buf_size": 2097152, 00:17:42.617 "enable_recv_pipe": true, 00:17:42.617 "enable_quickack": false, 00:17:42.617 "enable_placement_id": 0, 00:17:42.617 "enable_zerocopy_send_server": true, 00:17:42.617 "enable_zerocopy_send_client": false, 00:17:42.617 "zerocopy_threshold": 0, 00:17:42.617 "tls_version": 0, 00:17:42.617 "enable_ktls": false 00:17:42.617 } 00:17:42.617 } 00:17:42.617 ] 00:17:42.617 }, 00:17:42.617 { 00:17:42.617 "subsystem": "vmd", 00:17:42.617 "config": [] 00:17:42.617 }, 00:17:42.617 { 00:17:42.617 "subsystem": "accel", 00:17:42.617 "config": [ 00:17:42.617 { 00:17:42.617 "method": "accel_set_options", 00:17:42.617 "params": { 00:17:42.617 "small_cache_size": 128, 00:17:42.617 "large_cache_size": 16, 00:17:42.617 "task_count": 2048, 00:17:42.617 "sequence_count": 2048, 00:17:42.617 "buf_count": 2048 00:17:42.617 } 00:17:42.617 } 00:17:42.617 ] 00:17:42.617 }, 00:17:42.617 { 00:17:42.617 "subsystem": "bdev", 00:17:42.617 "config": [ 00:17:42.617 { 00:17:42.617 "method": "bdev_set_options", 00:17:42.617 "params": { 00:17:42.617 "bdev_io_pool_size": 65535, 00:17:42.617 "bdev_io_cache_size": 256, 00:17:42.617 
"bdev_auto_examine": true, 00:17:42.617 "iobuf_small_cache_size": 128, 00:17:42.617 "iobuf_large_cache_size": 16 00:17:42.617 } 00:17:42.617 }, 00:17:42.617 { 00:17:42.617 "method": "bdev_raid_set_options", 00:17:42.617 "params": { 00:17:42.617 "process_window_size_kb": 1024, 00:17:42.617 "process_max_bandwidth_mb_sec": 0 00:17:42.617 } 00:17:42.617 }, 00:17:42.617 { 00:17:42.617 "method": "bdev_iscsi_set_options", 00:17:42.617 "params": { 00:17:42.617 "timeout_sec": 30 00:17:42.617 } 00:17:42.617 }, 00:17:42.617 { 00:17:42.617 "method": "bdev_nvme_set_options", 00:17:42.617 "params": { 00:17:42.617 "action_on_timeout": "none", 00:17:42.617 "timeout_us": 0, 00:17:42.617 "timeout_admin_us": 0, 00:17:42.617 "keep_alive_timeout_ms": 10000, 00:17:42.617 "arbitration_burst": 0, 00:17:42.617 "low_priority_weight": 0, 00:17:42.617 "medium_priority_weight": 0, 00:17:42.617 "high_priority_weight": 0, 00:17:42.617 "nvme_adminq_poll_period_us": 10000, 00:17:42.617 "nvme_ioq_poll_period_us": 0, 00:17:42.617 "io_queue_requests": 512, 00:17:42.617 "delay_cmd_submit": true, 00:17:42.617 "transport_retry_count": 4, 00:17:42.617 "bdev_retry_count": 3, 00:17:42.617 "transport_ack_timeout": 0, 00:17:42.617 "ctrlr_loss_timeout_sec": 0, 00:17:42.617 "reconnect_delay_sec": 0, 00:17:42.617 "fast_io_fail_timeout_sec": 0, 00:17:42.617 "disable_auto_failback": false, 00:17:42.617 "generate_uuids": false, 00:17:42.617 "transport_tos": 0, 00:17:42.617 "nvme_error_stat": false, 00:17:42.617 "rdma_srq_size": 0, 00:17:42.617 "io_path_stat": false, 00:17:42.617 "allow_accel_sequence": false, 00:17:42.617 "rdma_max_cq_size": 0, 00:17:42.617 "rdma_cm_event_timeout_ms": 0, 00:17:42.617 "dhchap_digests": [ 00:17:42.617 "sha256", 00:17:42.617 "sha384", 00:17:42.617 "sha512" 00:17:42.617 ], 00:17:42.617 "dhchap_dhgroups": [ 00:17:42.617 "null", 00:17:42.617 "ffdhe2048", 00:17:42.617 "ffdhe3072", 00:17:42.617 "ffdhe4096", 00:17:42.617 "ffdhe6144", 00:17:42.617 "ffdhe8192" 00:17:42.617 ] 00:17:42.617 } 00:17:42.617 }, 00:17:42.617 { 00:17:42.617 "method": "bdev_nvme_attach_controller", 00:17:42.617 "params": { 00:17:42.617 "name": "TLSTEST", 00:17:42.617 "trtype": "TCP", 00:17:42.617 "adrfam": "IPv4", 00:17:42.617 "traddr": "10.0.0.2", 00:17:42.617 "trsvcid": "4420", 00:17:42.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.617 "prchk_reftag": false, 00:17:42.617 "prchk_guard": false, 00:17:42.617 "ctrlr_loss_timeout_sec": 0, 00:17:42.617 "reconnect_delay_sec": 0, 00:17:42.617 "fast_io_fail_timeout_sec": 0, 00:17:42.617 "psk": "/tmp/tmp.AfjINMTOHw", 00:17:42.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:42.617 "hdgst": false, 00:17:42.617 "ddgst": false 00:17:42.617 } 00:17:42.617 }, 00:17:42.617 { 00:17:42.617 "method": "bdev_nvme_set_hotplug", 00:17:42.617 "params": { 00:17:42.617 "period_us": 100000, 00:17:42.617 "enable": false 00:17:42.617 } 00:17:42.617 }, 00:17:42.617 { 00:17:42.617 "method": "bdev_wait_for_examine" 00:17:42.617 } 00:17:42.617 ] 00:17:42.617 }, 00:17:42.617 { 00:17:42.617 "subsystem": "nbd", 00:17:42.617 "config": [] 00:17:42.617 } 00:17:42.617 ] 00:17:42.617 }' 00:17:42.617 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2580729 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2580729 ']' 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2580729 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2580729 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2580729' 00:17:42.618 killing process with pid 2580729 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2580729 00:17:42.618 Received shutdown signal, test time was about 10.000000 seconds 00:17:42.618 00:17:42.618 Latency(us) 00:17:42.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.618 =================================================================================================================== 00:17:42.618 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:42.618 [2024-07-24 19:14:48.401616] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2580729 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2580493 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2580493 ']' 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2580493 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2580493 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2580493' 00:17:42.618 killing process with pid 2580493 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2580493 00:17:42.618 [2024-07-24 19:14:48.617891] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:42.618 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2580493 00:17:42.878 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:42.878 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.878 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:42.878 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:17:42.878 "subsystems": [ 00:17:42.878 { 00:17:42.878 "subsystem": "keyring", 00:17:42.878 "config": [] 00:17:42.878 }, 00:17:42.878 { 00:17:42.878 
"subsystem": "iobuf", 00:17:42.878 "config": [ 00:17:42.878 { 00:17:42.878 "method": "iobuf_set_options", 00:17:42.878 "params": { 00:17:42.878 "small_pool_count": 8192, 00:17:42.878 "large_pool_count": 1024, 00:17:42.878 "small_bufsize": 8192, 00:17:42.878 "large_bufsize": 135168 00:17:42.878 } 00:17:42.878 } 00:17:42.878 ] 00:17:42.878 }, 00:17:42.878 { 00:17:42.878 "subsystem": "sock", 00:17:42.878 "config": [ 00:17:42.878 { 00:17:42.878 "method": "sock_set_default_impl", 00:17:42.878 "params": { 00:17:42.878 "impl_name": "posix" 00:17:42.878 } 00:17:42.878 }, 00:17:42.878 { 00:17:42.878 "method": "sock_impl_set_options", 00:17:42.878 "params": { 00:17:42.878 "impl_name": "ssl", 00:17:42.878 "recv_buf_size": 4096, 00:17:42.878 "send_buf_size": 4096, 00:17:42.878 "enable_recv_pipe": true, 00:17:42.878 "enable_quickack": false, 00:17:42.878 "enable_placement_id": 0, 00:17:42.878 "enable_zerocopy_send_server": true, 00:17:42.878 "enable_zerocopy_send_client": false, 00:17:42.878 "zerocopy_threshold": 0, 00:17:42.878 "tls_version": 0, 00:17:42.878 "enable_ktls": false 00:17:42.878 } 00:17:42.878 }, 00:17:42.878 { 00:17:42.878 "method": "sock_impl_set_options", 00:17:42.878 "params": { 00:17:42.878 "impl_name": "posix", 00:17:42.878 "recv_buf_size": 2097152, 00:17:42.878 "send_buf_size": 2097152, 00:17:42.878 "enable_recv_pipe": true, 00:17:42.878 "enable_quickack": false, 00:17:42.878 "enable_placement_id": 0, 00:17:42.878 "enable_zerocopy_send_server": true, 00:17:42.878 "enable_zerocopy_send_client": false, 00:17:42.878 "zerocopy_threshold": 0, 00:17:42.878 "tls_version": 0, 00:17:42.878 "enable_ktls": false 00:17:42.878 } 00:17:42.878 } 00:17:42.878 ] 00:17:42.878 }, 00:17:42.878 { 00:17:42.878 "subsystem": "vmd", 00:17:42.878 "config": [] 00:17:42.878 }, 00:17:42.878 { 00:17:42.878 "subsystem": "accel", 00:17:42.878 "config": [ 00:17:42.878 { 00:17:42.878 "method": "accel_set_options", 00:17:42.878 "params": { 00:17:42.878 "small_cache_size": 128, 00:17:42.878 "large_cache_size": 16, 00:17:42.878 "task_count": 2048, 00:17:42.878 "sequence_count": 2048, 00:17:42.878 "buf_count": 2048 00:17:42.878 } 00:17:42.878 } 00:17:42.878 ] 00:17:42.878 }, 00:17:42.878 { 00:17:42.878 "subsystem": "bdev", 00:17:42.878 "config": [ 00:17:42.878 { 00:17:42.878 "method": "bdev_set_options", 00:17:42.878 "params": { 00:17:42.878 "bdev_io_pool_size": 65535, 00:17:42.878 "bdev_io_cache_size": 256, 00:17:42.878 "bdev_auto_examine": true, 00:17:42.878 "iobuf_small_cache_size": 128, 00:17:42.878 "iobuf_large_cache_size": 16 00:17:42.878 } 00:17:42.878 }, 00:17:42.878 { 00:17:42.878 "method": "bdev_raid_set_options", 00:17:42.878 "params": { 00:17:42.878 "process_window_size_kb": 1024, 00:17:42.878 "process_max_bandwidth_mb_sec": 0 00:17:42.878 } 00:17:42.878 }, 00:17:42.878 { 00:17:42.878 "method": "bdev_iscsi_set_options", 00:17:42.878 "params": { 00:17:42.878 "timeout_sec": 30 00:17:42.878 } 00:17:42.878 }, 00:17:42.878 { 00:17:42.878 "method": "bdev_nvme_set_options", 00:17:42.878 "params": { 00:17:42.879 "action_on_timeout": "none", 00:17:42.879 "timeout_us": 0, 00:17:42.879 "timeout_admin_us": 0, 00:17:42.879 "keep_alive_timeout_ms": 10000, 00:17:42.879 "arbitration_burst": 0, 00:17:42.879 "low_priority_weight": 0, 00:17:42.879 "medium_priority_weight": 0, 00:17:42.879 "high_priority_weight": 0, 00:17:42.879 "nvme_adminq_poll_period_us": 10000, 00:17:42.879 "nvme_ioq_poll_period_us": 0, 00:17:42.879 "io_queue_requests": 0, 00:17:42.879 "delay_cmd_submit": true, 00:17:42.879 "transport_retry_count": 4, 
00:17:42.879 "bdev_retry_count": 3, 00:17:42.879 "transport_ack_timeout": 0, 00:17:42.879 "ctrlr_loss_timeout_sec": 0, 00:17:42.879 "reconnect_delay_sec": 0, 00:17:42.879 "fast_io_fail_timeout_sec": 0, 00:17:42.879 "disable_auto_failback": false, 00:17:42.879 "generate_uuids": false, 00:17:42.879 "transport_tos": 0, 00:17:42.879 "nvme_error_stat": false, 00:17:42.879 "rdma_srq_size": 0, 00:17:42.879 "io_path_stat": false, 00:17:42.879 "allow_accel_sequence": false, 00:17:42.879 "rdma_max_cq_size": 0, 00:17:42.879 "rdma_cm_event_timeout_ms": 0, 00:17:42.879 "dhchap_digests": [ 00:17:42.879 "sha256", 00:17:42.879 "sha384", 00:17:42.879 "sha512" 00:17:42.879 ], 00:17:42.879 "dhchap_dhgroups": [ 00:17:42.879 "null", 00:17:42.879 "ffdhe2048", 00:17:42.879 "ffdhe3072", 00:17:42.879 "ffdhe4096", 00:17:42.879 "ffdhe6144", 00:17:42.879 "ffdhe8192" 00:17:42.879 ] 00:17:42.879 } 00:17:42.879 }, 00:17:42.879 { 00:17:42.879 "method": "bdev_nvme_set_hotplug", 00:17:42.879 "params": { 00:17:42.879 "period_us": 100000, 00:17:42.879 "enable": false 00:17:42.879 } 00:17:42.879 }, 00:17:42.879 { 00:17:42.879 "method": "bdev_malloc_create", 00:17:42.879 "params": { 00:17:42.879 "name": "malloc0", 00:17:42.879 "num_blocks": 8192, 00:17:42.879 "block_size": 4096, 00:17:42.879 "physical_block_size": 4096, 00:17:42.879 "uuid": "dd961967-d101-497d-b7fd-ac35b34d5d35", 00:17:42.879 "optimal_io_boundary": 0, 00:17:42.879 "md_size": 0, 00:17:42.879 "dif_type": 0, 00:17:42.879 "dif_is_head_of_md": false, 00:17:42.879 "dif_pi_format": 0 00:17:42.879 } 00:17:42.879 }, 00:17:42.879 { 00:17:42.879 "method": "bdev_wait_for_examine" 00:17:42.879 } 00:17:42.879 ] 00:17:42.879 }, 00:17:42.879 { 00:17:42.879 "subsystem": "nbd", 00:17:42.879 "config": [] 00:17:42.879 }, 00:17:42.879 { 00:17:42.879 "subsystem": "scheduler", 00:17:42.879 "config": [ 00:17:42.879 { 00:17:42.879 "method": "framework_set_scheduler", 00:17:42.879 "params": { 00:17:42.879 "name": "static" 00:17:42.879 } 00:17:42.879 } 00:17:42.879 ] 00:17:42.879 }, 00:17:42.879 { 00:17:42.879 "subsystem": "nvmf", 00:17:42.879 "config": [ 00:17:42.879 { 00:17:42.879 "method": "nvmf_set_config", 00:17:42.879 "params": { 00:17:42.879 "discovery_filter": "match_any", 00:17:42.879 "admin_cmd_passthru": { 00:17:42.879 "identify_ctrlr": false 00:17:42.879 } 00:17:42.879 } 00:17:42.879 }, 00:17:42.879 { 00:17:42.879 "method": "nvmf_set_max_subsystems", 00:17:42.879 "params": { 00:17:42.879 "max_subsystems": 1024 00:17:42.879 } 00:17:42.879 }, 00:17:42.879 { 00:17:42.879 "method": "nvmf_set_crdt", 00:17:42.879 "params": { 00:17:42.879 "crdt1": 0, 00:17:42.879 "crdt2": 0, 00:17:42.879 "crdt3": 0 00:17:42.879 } 00:17:42.879 }, 00:17:42.879 { 00:17:42.879 "method": "nvmf_create_transport", 00:17:42.879 "params": { 00:17:42.879 "trtype": "TCP", 00:17:42.879 "max_queue_depth": 128, 00:17:42.879 "max_io_qpairs_per_ctrlr": 127, 00:17:42.879 "in_capsule_data_size": 4096, 00:17:42.879 "max_io_size": 131072, 00:17:42.879 "io_unit_size": 131072, 00:17:42.879 "max_aq_depth": 128, 00:17:42.879 "num_shared_buffers": 511, 00:17:42.879 "buf_cache_size": 4294967295, 00:17:42.879 "dif_insert_or_strip": false, 00:17:42.879 "zcopy": false, 00:17:42.879 "c2h_success": false, 00:17:42.879 "sock_priority": 0, 00:17:42.879 "abort_timeout_sec": 1, 00:17:42.879 "ack_timeout": 0, 00:17:42.879 "data_wr_pool_size": 0 00:17:42.879 } 00:17:42.879 }, 00:17:42.879 { 00:17:42.879 "method": "nvmf_create_subsystem", 00:17:42.879 "params": { 00:17:42.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.879 
"allow_any_host": false, 00:17:42.879 "serial_number": "SPDK00000000000001", 00:17:42.879 "model_number": "SPDK bdev Controller", 00:17:42.879 "max_namespaces": 10, 00:17:42.879 "min_cntlid": 1, 00:17:42.879 "max_cntlid": 65519, 00:17:42.879 "ana_reporting": false 00:17:42.879 } 00:17:42.879 }, 00:17:42.879 { 00:17:42.879 "method": "nvmf_subsystem_add_host", 00:17:42.879 "params": { 00:17:42.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.879 "host": "nqn.2016-06.io.spdk:host1", 00:17:42.879 "psk": "/tmp/tmp.AfjINMTOHw" 00:17:42.879 } 00:17:42.879 }, 00:17:42.879 { 00:17:42.879 "method": "nvmf_subsystem_add_ns", 00:17:42.879 "params": { 00:17:42.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.879 "namespace": { 00:17:42.879 "nsid": 1, 00:17:42.879 "bdev_name": "malloc0", 00:17:42.879 "nguid": "DD961967D101497DB7FDAC35B34D5D35", 00:17:42.879 "uuid": "dd961967-d101-497d-b7fd-ac35b34d5d35", 00:17:42.879 "no_auto_visible": false 00:17:42.879 } 00:17:42.879 } 00:17:42.879 }, 00:17:42.879 { 00:17:42.879 "method": "nvmf_subsystem_add_listener", 00:17:42.879 "params": { 00:17:42.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.879 "listen_address": { 00:17:42.879 "trtype": "TCP", 00:17:42.879 "adrfam": "IPv4", 00:17:42.879 "traddr": "10.0.0.2", 00:17:42.879 "trsvcid": "4420" 00:17:42.879 }, 00:17:42.879 "secure_channel": true 00:17:42.879 } 00:17:42.879 } 00:17:42.879 ] 00:17:42.879 } 00:17:42.879 ] 00:17:42.879 }' 00:17:42.879 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.879 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2580933 00:17:42.879 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:42.879 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2580933 00:17:42.879 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2580933 ']' 00:17:42.879 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.879 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:42.879 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.879 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:42.879 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.879 [2024-07-24 19:14:48.871541] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:17:42.879 [2024-07-24 19:14:48.871647] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.138 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.138 [2024-07-24 19:14:48.930714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.138 [2024-07-24 19:14:49.025008] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:43.138 [2024-07-24 19:14:49.025064] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.138 [2024-07-24 19:14:49.025089] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.138 [2024-07-24 19:14:49.025100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.138 [2024-07-24 19:14:49.025110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.138 [2024-07-24 19:14:49.025187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.397 [2024-07-24 19:14:49.238357] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.397 [2024-07-24 19:14:49.261852] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:43.397 [2024-07-24 19:14:49.277919] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.397 [2024-07-24 19:14:49.278168] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.963 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:43.963 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:43.963 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.963 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:43.963 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.963 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.963 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2581058 00:17:43.963 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2581058 /var/tmp/bdevperf.sock 00:17:43.964 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2581058 ']' 00:17:43.964 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.964 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:43.964 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
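The bdevperf launch traced next (test 204) takes its JSON configuration from /dev/fd/63 rather than from a file, and the target above was started the same way with -c /dev/fd/62. That path is what bash process substitution produces; a sketch of the pattern, with a hypothetical $bdevperf_config variable standing in for the JSON echoed below:

    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperf_config")

The substitution avoids leaving a temporary config file (and the PSK path inside it) on disk after the test.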
00:17:43.964 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:43.964 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:43.964 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.964 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:43.964 "subsystems": [ 00:17:43.964 { 00:17:43.964 "subsystem": "keyring", 00:17:43.964 "config": [] 00:17:43.964 }, 00:17:43.964 { 00:17:43.964 "subsystem": "iobuf", 00:17:43.964 "config": [ 00:17:43.964 { 00:17:43.964 "method": "iobuf_set_options", 00:17:43.964 "params": { 00:17:43.964 "small_pool_count": 8192, 00:17:43.964 "large_pool_count": 1024, 00:17:43.964 "small_bufsize": 8192, 00:17:43.964 "large_bufsize": 135168 00:17:43.964 } 00:17:43.964 } 00:17:43.964 ] 00:17:43.964 }, 00:17:43.964 { 00:17:43.964 "subsystem": "sock", 00:17:43.964 "config": [ 00:17:43.964 { 00:17:43.964 "method": "sock_set_default_impl", 00:17:43.964 "params": { 00:17:43.964 "impl_name": "posix" 00:17:43.964 } 00:17:43.964 }, 00:17:43.964 { 00:17:43.964 "method": "sock_impl_set_options", 00:17:43.964 "params": { 00:17:43.964 "impl_name": "ssl", 00:17:43.964 "recv_buf_size": 4096, 00:17:43.964 "send_buf_size": 4096, 00:17:43.964 "enable_recv_pipe": true, 00:17:43.964 "enable_quickack": false, 00:17:43.964 "enable_placement_id": 0, 00:17:43.964 "enable_zerocopy_send_server": true, 00:17:43.964 "enable_zerocopy_send_client": false, 00:17:43.964 "zerocopy_threshold": 0, 00:17:43.964 "tls_version": 0, 00:17:43.964 "enable_ktls": false 00:17:43.964 } 00:17:43.964 }, 00:17:43.964 { 00:17:43.964 "method": "sock_impl_set_options", 00:17:43.964 "params": { 00:17:43.964 "impl_name": "posix", 00:17:43.964 "recv_buf_size": 2097152, 00:17:43.964 "send_buf_size": 2097152, 00:17:43.964 "enable_recv_pipe": true, 00:17:43.964 "enable_quickack": false, 00:17:43.964 "enable_placement_id": 0, 00:17:43.964 "enable_zerocopy_send_server": true, 00:17:43.964 "enable_zerocopy_send_client": false, 00:17:43.964 "zerocopy_threshold": 0, 00:17:43.964 "tls_version": 0, 00:17:43.964 "enable_ktls": false 00:17:43.964 } 00:17:43.964 } 00:17:43.964 ] 00:17:43.964 }, 00:17:43.964 { 00:17:43.964 "subsystem": "vmd", 00:17:43.964 "config": [] 00:17:43.964 }, 00:17:43.964 { 00:17:43.964 "subsystem": "accel", 00:17:43.964 "config": [ 00:17:43.964 { 00:17:43.964 "method": "accel_set_options", 00:17:43.964 "params": { 00:17:43.964 "small_cache_size": 128, 00:17:43.964 "large_cache_size": 16, 00:17:43.964 "task_count": 2048, 00:17:43.964 "sequence_count": 2048, 00:17:43.964 "buf_count": 2048 00:17:43.964 } 00:17:43.964 } 00:17:43.964 ] 00:17:43.964 }, 00:17:43.964 { 00:17:43.964 "subsystem": "bdev", 00:17:43.964 "config": [ 00:17:43.964 { 00:17:43.964 "method": "bdev_set_options", 00:17:43.964 "params": { 00:17:43.964 "bdev_io_pool_size": 65535, 00:17:43.964 "bdev_io_cache_size": 256, 00:17:43.964 "bdev_auto_examine": true, 00:17:43.964 "iobuf_small_cache_size": 128, 00:17:43.964 "iobuf_large_cache_size": 16 00:17:43.964 } 00:17:43.964 }, 00:17:43.964 { 00:17:43.964 "method": "bdev_raid_set_options", 00:17:43.964 "params": { 00:17:43.964 "process_window_size_kb": 1024, 00:17:43.964 "process_max_bandwidth_mb_sec": 0 00:17:43.964 } 00:17:43.964 }, 00:17:43.964 { 00:17:43.964 "method": "bdev_iscsi_set_options", 
00:17:43.964 "params": { 00:17:43.964 "timeout_sec": 30 00:17:43.964 } 00:17:43.964 }, 00:17:43.964 { 00:17:43.964 "method": "bdev_nvme_set_options", 00:17:43.964 "params": { 00:17:43.964 "action_on_timeout": "none", 00:17:43.964 "timeout_us": 0, 00:17:43.964 "timeout_admin_us": 0, 00:17:43.964 "keep_alive_timeout_ms": 10000, 00:17:43.964 "arbitration_burst": 0, 00:17:43.964 "low_priority_weight": 0, 00:17:43.964 "medium_priority_weight": 0, 00:17:43.964 "high_priority_weight": 0, 00:17:43.964 "nvme_adminq_poll_period_us": 10000, 00:17:43.964 "nvme_ioq_poll_period_us": 0, 00:17:43.964 "io_queue_requests": 512, 00:17:43.964 "delay_cmd_submit": true, 00:17:43.964 "transport_retry_count": 4, 00:17:43.964 "bdev_retry_count": 3, 00:17:43.964 "transport_ack_timeout": 0, 00:17:43.964 "ctrlr_loss_timeout_sec": 0, 00:17:43.964 "reconnect_delay_sec": 0, 00:17:43.964 "fast_io_fail_timeout_sec": 0, 00:17:43.964 "disable_auto_failback": false, 00:17:43.964 "generate_uuids": false, 00:17:43.964 "transport_tos": 0, 00:17:43.964 "nvme_error_stat": false, 00:17:43.964 "rdma_srq_size": 0, 00:17:43.964 "io_path_stat": false, 00:17:43.964 "allow_accel_sequence": false, 00:17:43.964 "rdma_max_cq_size": 0, 00:17:43.964 "rdma_cm_event_timeout_ms": 0, 00:17:43.964 "dhchap_digests": [ 00:17:43.964 "sha256", 00:17:43.964 "sha384", 00:17:43.964 "sha512" 00:17:43.964 ], 00:17:43.964 "dhchap_dhgroups": [ 00:17:43.964 "null", 00:17:43.964 "ffdhe2048", 00:17:43.964 "ffdhe3072", 00:17:43.964 "ffdhe4096", 00:17:43.964 "ffdhe6144", 00:17:43.964 "ffdhe8192" 00:17:43.964 ] 00:17:43.964 } 00:17:43.964 }, 00:17:43.964 { 00:17:43.964 "method": "bdev_nvme_attach_controller", 00:17:43.964 "params": { 00:17:43.964 "name": "TLSTEST", 00:17:43.964 "trtype": "TCP", 00:17:43.964 "adrfam": "IPv4", 00:17:43.964 "traddr": "10.0.0.2", 00:17:43.964 "trsvcid": "4420", 00:17:43.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.964 "prchk_reftag": false, 00:17:43.964 "prchk_guard": false, 00:17:43.964 "ctrlr_loss_timeout_sec": 0, 00:17:43.964 "reconnect_delay_sec": 0, 00:17:43.964 "fast_io_fail_timeout_sec": 0, 00:17:43.964 "psk": "/tmp/tmp.AfjINMTOHw", 00:17:43.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.964 "hdgst": false, 00:17:43.964 "ddgst": false 00:17:43.964 } 00:17:43.964 }, 00:17:43.964 { 00:17:43.964 "method": "bdev_nvme_set_hotplug", 00:17:43.964 "params": { 00:17:43.964 "period_us": 100000, 00:17:43.964 "enable": false 00:17:43.964 } 00:17:43.964 }, 00:17:43.964 { 00:17:43.964 "method": "bdev_wait_for_examine" 00:17:43.964 } 00:17:43.964 ] 00:17:43.964 }, 00:17:43.964 { 00:17:43.964 "subsystem": "nbd", 00:17:43.964 "config": [] 00:17:43.964 } 00:17:43.964 ] 00:17:43.964 }' 00:17:44.223 [2024-07-24 19:14:49.991596] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:17:44.223 [2024-07-24 19:14:49.991691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2581058 ] 00:17:44.223 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.223 [2024-07-24 19:14:50.064044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.223 [2024-07-24 19:14:50.184376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.480 [2024-07-24 19:14:50.337005] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:44.480 [2024-07-24 19:14:50.337147] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:45.045 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.045 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:45.045 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:45.302 Running I/O for 10 seconds... 00:17:55.262 00:17:55.262 Latency(us) 00:17:55.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.262 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:55.262 Verification LBA range: start 0x0 length 0x2000 00:17:55.262 TLSTESTn1 : 10.02 3102.38 12.12 0.00 0.00 41186.95 6043.88 57089.14 00:17:55.262 =================================================================================================================== 00:17:55.263 Total : 3102.38 12.12 0.00 0.00 41186.95 6043.88 57089.14 00:17:55.263 0 00:17:55.263 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:55.263 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2581058 00:17:55.263 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2581058 ']' 00:17:55.263 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2581058 00:17:55.263 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:55.263 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:55.263 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2581058 00:17:55.263 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:55.263 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:55.263 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2581058' 00:17:55.263 killing process with pid 2581058 00:17:55.263 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2581058 00:17:55.263 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.263 00:17:55.263 Latency(us) 00:17:55.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.263 
=================================================================================================================== 00:17:55.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.263 [2024-07-24 19:15:01.230576] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:55.263 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2581058 00:17:55.520 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2580933 00:17:55.520 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2580933 ']' 00:17:55.520 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2580933 00:17:55.520 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:55.520 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:55.520 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2580933 00:17:55.520 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:55.520 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:55.520 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2580933' 00:17:55.520 killing process with pid 2580933 00:17:55.520 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2580933 00:17:55.520 [2024-07-24 19:15:01.444511] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:55.520 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2580933 00:17:55.778 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:17:55.778 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.778 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:55.778 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.778 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2582169 00:17:55.778 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:55.778 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2582169 00:17:55.778 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2582169 ']' 00:17:55.778 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.778 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:55.778 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
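The 10-second TLSTESTn1 run that just completed came from a bdevperf instance configured entirely through JSON on /dev/fd/63 (target/tls.sh@204 above): the config dump shows bdev_nvme_attach_controller carrying the PSK as a raw file path ("psk": "/tmp/tmp.AfjINMTOHw"), which is exactly the deprecated flow behind the "nvmf_tcp_psk_path ... to be removed in v24.09" warnings in this log. A condensed sketch of that launch, trimmed to the one attach call that matters for TLS (a partial subsystem config like this relies on bdevperf defaults for everything omitted):

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# -z: wait for the perform_tests RPC; -c <(...): appears as /dev/fd/63 in the log above
$bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "TLSTEST",
            "trtype": "TCP",
            "adrfam": "IPv4",
            "traddr": "10.0.0.2",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "psk": "/tmp/tmp.AfjINMTOHw"
          }
        }
      ]
    }
  ]
}
EOF
)
# Kick off the run from another shell, as target/tls.sh@211 did:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests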
00:17:55.778 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:55.778 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.778 [2024-07-24 19:15:01.693884] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:17:55.778 [2024-07-24 19:15:01.693973] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.778 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.778 [2024-07-24 19:15:01.756239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.036 [2024-07-24 19:15:01.857527] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.036 [2024-07-24 19:15:01.857587] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.036 [2024-07-24 19:15:01.857621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.036 [2024-07-24 19:15:01.857633] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.036 [2024-07-24 19:15:01.857642] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:56.036 [2024-07-24 19:15:01.857676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.036 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:56.036 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:56.036 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:56.036 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:56.036 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.036 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.036 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.AfjINMTOHw 00:17:56.036 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AfjINMTOHw 00:17:56.036 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:56.293 [2024-07-24 19:15:02.256308] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.293 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:56.859 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:56.859 [2024-07-24 19:15:02.845946] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.859 [2024-07-24 19:15:02.846237] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.859 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:57.423 malloc0 00:17:57.423 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:57.679 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AfjINMTOHw 00:17:57.936 [2024-07-24 19:15:03.743424] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:57.936 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2582402 00:17:57.936 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.936 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:57.936 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2582402 /var/tmp/bdevperf.sock 00:17:57.936 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2582402 ']' 00:17:57.936 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.936 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.936 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.936 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.936 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.936 [2024-07-24 19:15:03.811478] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
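setup_nvmf_tgt (target/tls.sh@49-58, replayed just above) is the complete target-side recipe: a TCP transport, a subsystem backed by a malloc bdev, a TLS-enabled listener (-k), and a host entry tied to the PSK file. Collected into one runnable sequence, with rpc.py standing in for the full spdk/scripts/rpc.py path and the key being the temp file this run generated:

# Target-side TLS setup, command for command as issued above
key=/tmp/tmp.AfjINMTOHw
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS on the listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"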
00:17:57.936 [2024-07-24 19:15:03.811584] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2582402 ] 00:17:57.936 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.936 [2024-07-24 19:15:03.873034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.193 [2024-07-24 19:15:03.991802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.193 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.193 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:58.193 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AfjINMTOHw 00:17:58.450 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:58.708 [2024-07-24 19:15:04.674703] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.967 nvme0n1 00:17:58.967 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:58.967 Running I/O for 1 seconds... 00:18:00.343 00:18:00.343 Latency(us) 00:18:00.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.343 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:00.343 Verification LBA range: start 0x0 length 0x2000 00:18:00.343 nvme0n1 : 1.02 3158.16 12.34 0.00 0.00 40089.15 7912.87 37865.24 00:18:00.343 =================================================================================================================== 00:18:00.343 Total : 3158.16 12.34 0.00 0.00 40089.15 7912.87 37865.24 00:18:00.343 0 00:18:00.343 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2582402 00:18:00.343 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2582402 ']' 00:18:00.343 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2582402 00:18:00.343 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:00.343 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.343 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2582402 00:18:00.343 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:00.343 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:00.343 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2582402' 00:18:00.343 killing process with pid 2582402 00:18:00.343 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2582402 00:18:00.343 Received shutdown signal, 
test time was about 1.000000 seconds 00:18:00.343 00:18:00.343 Latency(us) 00:18:00.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.343 =================================================================================================================== 00:18:00.343 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.343 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2582402 00:18:00.343 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2582169 00:18:00.343 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2582169 ']' 00:18:00.343 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2582169 00:18:00.343 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:00.343 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.343 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2582169 00:18:00.343 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:00.343 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:00.343 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2582169' 00:18:00.343 killing process with pid 2582169 00:18:00.343 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2582169 00:18:00.343 [2024-07-24 19:15:06.217561] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:00.343 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2582169 00:18:00.601 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:00.601 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:00.601 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.601 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.601 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2582707 00:18:00.601 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:00.601 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2582707 00:18:00.601 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2582707 ']' 00:18:00.601 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.601 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.601 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
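The bdevperf iteration that just ended (pid 2582402) used the keyring-based replacement for that deprecated path: the PSK file is first registered as a named key on the bdevperf RPC socket, and bdev_nvme_attach_controller then references the key by name via --psk. The two RPCs exactly as target/tls.sh@227-228 issued them (key0 and nvme0 are just the names this run chose):

# Keyring-based TLS attach against the bdevperf RPC socket
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AfjINMTOHw
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
  -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1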
00:18:00.602 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.602 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.602 [2024-07-24 19:15:06.505544] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:18:00.602 [2024-07-24 19:15:06.505645] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.602 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.602 [2024-07-24 19:15:06.570138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.894 [2024-07-24 19:15:06.686759] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.894 [2024-07-24 19:15:06.686826] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.894 [2024-07-24 19:15:06.686841] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.894 [2024-07-24 19:15:06.686855] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.894 [2024-07-24 19:15:06.686866] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.894 [2024-07-24 19:15:06.686912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.894 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:00.894 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:00.894 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:00.894 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:00.894 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.894 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.894 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:18:00.894 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.894 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.894 [2024-07-24 19:15:06.822975] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.894 malloc0 00:18:00.894 [2024-07-24 19:15:06.854000] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:00.894 [2024-07-24 19:15:06.871652] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.180 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.180 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2582732 00:18:01.180 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2582732 /var/tmp/bdevperf.sock 00:18:01.180 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2582732 ']' 00:18:01.180 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.180 
19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:01.180 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.180 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.180 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.180 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.180 [2024-07-24 19:15:06.943065] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:18:01.180 [2024-07-24 19:15:06.943157] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2582732 ] 00:18:01.180 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.180 [2024-07-24 19:15:07.004342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.180 [2024-07-24 19:15:07.121325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.438 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:01.438 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:01.438 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AfjINMTOHw 00:18:01.696 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:01.954 [2024-07-24 19:15:07.798550] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.954 nvme0n1 00:18:01.954 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:02.212 Running I/O for 1 seconds... 
00:18:03.146 00:18:03.146 Latency(us) 00:18:03.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.146 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:03.146 Verification LBA range: start 0x0 length 0x2000 00:18:03.146 nvme0n1 : 1.02 3107.32 12.14 0.00 0.00 40687.87 6893.42 52817.16 00:18:03.146 =================================================================================================================== 00:18:03.146 Total : 3107.32 12.14 0.00 0.00 40687.87 6893.42 52817.16 00:18:03.146 0 00:18:03.146 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:18:03.146 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.146 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.146 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.146 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:18:03.146 "subsystems": [ 00:18:03.146 { 00:18:03.146 "subsystem": "keyring", 00:18:03.146 "config": [ 00:18:03.146 { 00:18:03.146 "method": "keyring_file_add_key", 00:18:03.146 "params": { 00:18:03.146 "name": "key0", 00:18:03.146 "path": "/tmp/tmp.AfjINMTOHw" 00:18:03.146 } 00:18:03.146 } 00:18:03.146 ] 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "subsystem": "iobuf", 00:18:03.146 "config": [ 00:18:03.146 { 00:18:03.146 "method": "iobuf_set_options", 00:18:03.146 "params": { 00:18:03.146 "small_pool_count": 8192, 00:18:03.146 "large_pool_count": 1024, 00:18:03.146 "small_bufsize": 8192, 00:18:03.146 "large_bufsize": 135168 00:18:03.146 } 00:18:03.146 } 00:18:03.146 ] 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "subsystem": "sock", 00:18:03.146 "config": [ 00:18:03.146 { 00:18:03.146 "method": "sock_set_default_impl", 00:18:03.146 "params": { 00:18:03.146 "impl_name": "posix" 00:18:03.146 } 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "method": "sock_impl_set_options", 00:18:03.146 "params": { 00:18:03.146 "impl_name": "ssl", 00:18:03.146 "recv_buf_size": 4096, 00:18:03.146 "send_buf_size": 4096, 00:18:03.146 "enable_recv_pipe": true, 00:18:03.146 "enable_quickack": false, 00:18:03.146 "enable_placement_id": 0, 00:18:03.146 "enable_zerocopy_send_server": true, 00:18:03.146 "enable_zerocopy_send_client": false, 00:18:03.146 "zerocopy_threshold": 0, 00:18:03.146 "tls_version": 0, 00:18:03.146 "enable_ktls": false 00:18:03.146 } 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "method": "sock_impl_set_options", 00:18:03.146 "params": { 00:18:03.146 "impl_name": "posix", 00:18:03.146 "recv_buf_size": 2097152, 00:18:03.146 "send_buf_size": 2097152, 00:18:03.146 "enable_recv_pipe": true, 00:18:03.146 "enable_quickack": false, 00:18:03.146 "enable_placement_id": 0, 00:18:03.146 "enable_zerocopy_send_server": true, 00:18:03.146 "enable_zerocopy_send_client": false, 00:18:03.146 "zerocopy_threshold": 0, 00:18:03.146 "tls_version": 0, 00:18:03.146 "enable_ktls": false 00:18:03.146 } 00:18:03.146 } 00:18:03.146 ] 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "subsystem": "vmd", 00:18:03.146 "config": [] 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "subsystem": "accel", 00:18:03.146 "config": [ 00:18:03.146 { 00:18:03.146 "method": "accel_set_options", 00:18:03.146 "params": { 00:18:03.146 "small_cache_size": 128, 00:18:03.146 "large_cache_size": 16, 00:18:03.146 "task_count": 2048, 00:18:03.146 "sequence_count": 2048, 00:18:03.146 "buf_count": 
2048 00:18:03.146 } 00:18:03.146 } 00:18:03.146 ] 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "subsystem": "bdev", 00:18:03.146 "config": [ 00:18:03.146 { 00:18:03.146 "method": "bdev_set_options", 00:18:03.146 "params": { 00:18:03.146 "bdev_io_pool_size": 65535, 00:18:03.146 "bdev_io_cache_size": 256, 00:18:03.146 "bdev_auto_examine": true, 00:18:03.146 "iobuf_small_cache_size": 128, 00:18:03.146 "iobuf_large_cache_size": 16 00:18:03.146 } 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "method": "bdev_raid_set_options", 00:18:03.146 "params": { 00:18:03.146 "process_window_size_kb": 1024, 00:18:03.146 "process_max_bandwidth_mb_sec": 0 00:18:03.146 } 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "method": "bdev_iscsi_set_options", 00:18:03.146 "params": { 00:18:03.146 "timeout_sec": 30 00:18:03.146 } 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "method": "bdev_nvme_set_options", 00:18:03.146 "params": { 00:18:03.146 "action_on_timeout": "none", 00:18:03.146 "timeout_us": 0, 00:18:03.146 "timeout_admin_us": 0, 00:18:03.146 "keep_alive_timeout_ms": 10000, 00:18:03.146 "arbitration_burst": 0, 00:18:03.146 "low_priority_weight": 0, 00:18:03.146 "medium_priority_weight": 0, 00:18:03.146 "high_priority_weight": 0, 00:18:03.146 "nvme_adminq_poll_period_us": 10000, 00:18:03.146 "nvme_ioq_poll_period_us": 0, 00:18:03.146 "io_queue_requests": 0, 00:18:03.146 "delay_cmd_submit": true, 00:18:03.146 "transport_retry_count": 4, 00:18:03.146 "bdev_retry_count": 3, 00:18:03.146 "transport_ack_timeout": 0, 00:18:03.146 "ctrlr_loss_timeout_sec": 0, 00:18:03.146 "reconnect_delay_sec": 0, 00:18:03.146 "fast_io_fail_timeout_sec": 0, 00:18:03.146 "disable_auto_failback": false, 00:18:03.146 "generate_uuids": false, 00:18:03.146 "transport_tos": 0, 00:18:03.146 "nvme_error_stat": false, 00:18:03.146 "rdma_srq_size": 0, 00:18:03.146 "io_path_stat": false, 00:18:03.146 "allow_accel_sequence": false, 00:18:03.146 "rdma_max_cq_size": 0, 00:18:03.146 "rdma_cm_event_timeout_ms": 0, 00:18:03.146 "dhchap_digests": [ 00:18:03.146 "sha256", 00:18:03.146 "sha384", 00:18:03.146 "sha512" 00:18:03.146 ], 00:18:03.146 "dhchap_dhgroups": [ 00:18:03.146 "null", 00:18:03.146 "ffdhe2048", 00:18:03.146 "ffdhe3072", 00:18:03.146 "ffdhe4096", 00:18:03.146 "ffdhe6144", 00:18:03.146 "ffdhe8192" 00:18:03.146 ] 00:18:03.146 } 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "method": "bdev_nvme_set_hotplug", 00:18:03.146 "params": { 00:18:03.146 "period_us": 100000, 00:18:03.146 "enable": false 00:18:03.146 } 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "method": "bdev_malloc_create", 00:18:03.146 "params": { 00:18:03.146 "name": "malloc0", 00:18:03.146 "num_blocks": 8192, 00:18:03.146 "block_size": 4096, 00:18:03.146 "physical_block_size": 4096, 00:18:03.146 "uuid": "b72d93b7-4cb9-41bd-89eb-c5432d6a9433", 00:18:03.146 "optimal_io_boundary": 0, 00:18:03.146 "md_size": 0, 00:18:03.146 "dif_type": 0, 00:18:03.146 "dif_is_head_of_md": false, 00:18:03.146 "dif_pi_format": 0 00:18:03.146 } 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "method": "bdev_wait_for_examine" 00:18:03.146 } 00:18:03.146 ] 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "subsystem": "nbd", 00:18:03.146 "config": [] 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "subsystem": "scheduler", 00:18:03.146 "config": [ 00:18:03.146 { 00:18:03.146 "method": "framework_set_scheduler", 00:18:03.146 "params": { 00:18:03.146 "name": "static" 00:18:03.146 } 00:18:03.146 } 00:18:03.146 ] 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "subsystem": "nvmf", 00:18:03.146 "config": [ 00:18:03.146 { 00:18:03.146 
"method": "nvmf_set_config", 00:18:03.146 "params": { 00:18:03.146 "discovery_filter": "match_any", 00:18:03.146 "admin_cmd_passthru": { 00:18:03.146 "identify_ctrlr": false 00:18:03.146 } 00:18:03.146 } 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "method": "nvmf_set_max_subsystems", 00:18:03.146 "params": { 00:18:03.146 "max_subsystems": 1024 00:18:03.146 } 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "method": "nvmf_set_crdt", 00:18:03.146 "params": { 00:18:03.146 "crdt1": 0, 00:18:03.146 "crdt2": 0, 00:18:03.146 "crdt3": 0 00:18:03.146 } 00:18:03.146 }, 00:18:03.146 { 00:18:03.146 "method": "nvmf_create_transport", 00:18:03.146 "params": { 00:18:03.146 "trtype": "TCP", 00:18:03.146 "max_queue_depth": 128, 00:18:03.146 "max_io_qpairs_per_ctrlr": 127, 00:18:03.146 "in_capsule_data_size": 4096, 00:18:03.146 "max_io_size": 131072, 00:18:03.146 "io_unit_size": 131072, 00:18:03.146 "max_aq_depth": 128, 00:18:03.146 "num_shared_buffers": 511, 00:18:03.146 "buf_cache_size": 4294967295, 00:18:03.147 "dif_insert_or_strip": false, 00:18:03.147 "zcopy": false, 00:18:03.147 "c2h_success": false, 00:18:03.147 "sock_priority": 0, 00:18:03.147 "abort_timeout_sec": 1, 00:18:03.147 "ack_timeout": 0, 00:18:03.147 "data_wr_pool_size": 0 00:18:03.147 } 00:18:03.147 }, 00:18:03.147 { 00:18:03.147 "method": "nvmf_create_subsystem", 00:18:03.147 "params": { 00:18:03.147 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.147 "allow_any_host": false, 00:18:03.147 "serial_number": "00000000000000000000", 00:18:03.147 "model_number": "SPDK bdev Controller", 00:18:03.147 "max_namespaces": 32, 00:18:03.147 "min_cntlid": 1, 00:18:03.147 "max_cntlid": 65519, 00:18:03.147 "ana_reporting": false 00:18:03.147 } 00:18:03.147 }, 00:18:03.147 { 00:18:03.147 "method": "nvmf_subsystem_add_host", 00:18:03.147 "params": { 00:18:03.147 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.147 "host": "nqn.2016-06.io.spdk:host1", 00:18:03.147 "psk": "key0" 00:18:03.147 } 00:18:03.147 }, 00:18:03.147 { 00:18:03.147 "method": "nvmf_subsystem_add_ns", 00:18:03.147 "params": { 00:18:03.147 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.147 "namespace": { 00:18:03.147 "nsid": 1, 00:18:03.147 "bdev_name": "malloc0", 00:18:03.147 "nguid": "B72D93B74CB941BD89EBC5432D6A9433", 00:18:03.147 "uuid": "b72d93b7-4cb9-41bd-89eb-c5432d6a9433", 00:18:03.147 "no_auto_visible": false 00:18:03.147 } 00:18:03.147 } 00:18:03.147 }, 00:18:03.147 { 00:18:03.147 "method": "nvmf_subsystem_add_listener", 00:18:03.147 "params": { 00:18:03.147 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.147 "listen_address": { 00:18:03.147 "trtype": "TCP", 00:18:03.147 "adrfam": "IPv4", 00:18:03.147 "traddr": "10.0.0.2", 00:18:03.147 "trsvcid": "4420" 00:18:03.147 }, 00:18:03.147 "secure_channel": false, 00:18:03.147 "sock_impl": "ssl" 00:18:03.147 } 00:18:03.147 } 00:18:03.147 ] 00:18:03.147 } 00:18:03.147 ] 00:18:03.147 }' 00:18:03.147 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:03.713 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:18:03.713 "subsystems": [ 00:18:03.713 { 00:18:03.713 "subsystem": "keyring", 00:18:03.713 "config": [ 00:18:03.713 { 00:18:03.713 "method": "keyring_file_add_key", 00:18:03.713 "params": { 00:18:03.713 "name": "key0", 00:18:03.713 "path": "/tmp/tmp.AfjINMTOHw" 00:18:03.713 } 00:18:03.713 } 00:18:03.713 ] 00:18:03.713 }, 00:18:03.713 { 00:18:03.713 "subsystem": "iobuf", 00:18:03.713 
"config": [ 00:18:03.713 { 00:18:03.713 "method": "iobuf_set_options", 00:18:03.713 "params": { 00:18:03.713 "small_pool_count": 8192, 00:18:03.713 "large_pool_count": 1024, 00:18:03.713 "small_bufsize": 8192, 00:18:03.713 "large_bufsize": 135168 00:18:03.713 } 00:18:03.713 } 00:18:03.713 ] 00:18:03.713 }, 00:18:03.713 { 00:18:03.713 "subsystem": "sock", 00:18:03.713 "config": [ 00:18:03.713 { 00:18:03.713 "method": "sock_set_default_impl", 00:18:03.713 "params": { 00:18:03.713 "impl_name": "posix" 00:18:03.713 } 00:18:03.713 }, 00:18:03.713 { 00:18:03.713 "method": "sock_impl_set_options", 00:18:03.713 "params": { 00:18:03.713 "impl_name": "ssl", 00:18:03.713 "recv_buf_size": 4096, 00:18:03.713 "send_buf_size": 4096, 00:18:03.713 "enable_recv_pipe": true, 00:18:03.714 "enable_quickack": false, 00:18:03.714 "enable_placement_id": 0, 00:18:03.714 "enable_zerocopy_send_server": true, 00:18:03.714 "enable_zerocopy_send_client": false, 00:18:03.714 "zerocopy_threshold": 0, 00:18:03.714 "tls_version": 0, 00:18:03.714 "enable_ktls": false 00:18:03.714 } 00:18:03.714 }, 00:18:03.714 { 00:18:03.714 "method": "sock_impl_set_options", 00:18:03.714 "params": { 00:18:03.714 "impl_name": "posix", 00:18:03.714 "recv_buf_size": 2097152, 00:18:03.714 "send_buf_size": 2097152, 00:18:03.714 "enable_recv_pipe": true, 00:18:03.714 "enable_quickack": false, 00:18:03.714 "enable_placement_id": 0, 00:18:03.714 "enable_zerocopy_send_server": true, 00:18:03.714 "enable_zerocopy_send_client": false, 00:18:03.714 "zerocopy_threshold": 0, 00:18:03.714 "tls_version": 0, 00:18:03.714 "enable_ktls": false 00:18:03.714 } 00:18:03.714 } 00:18:03.714 ] 00:18:03.714 }, 00:18:03.714 { 00:18:03.714 "subsystem": "vmd", 00:18:03.714 "config": [] 00:18:03.714 }, 00:18:03.714 { 00:18:03.714 "subsystem": "accel", 00:18:03.714 "config": [ 00:18:03.714 { 00:18:03.714 "method": "accel_set_options", 00:18:03.714 "params": { 00:18:03.714 "small_cache_size": 128, 00:18:03.714 "large_cache_size": 16, 00:18:03.714 "task_count": 2048, 00:18:03.714 "sequence_count": 2048, 00:18:03.714 "buf_count": 2048 00:18:03.714 } 00:18:03.714 } 00:18:03.714 ] 00:18:03.714 }, 00:18:03.714 { 00:18:03.714 "subsystem": "bdev", 00:18:03.714 "config": [ 00:18:03.714 { 00:18:03.714 "method": "bdev_set_options", 00:18:03.714 "params": { 00:18:03.714 "bdev_io_pool_size": 65535, 00:18:03.714 "bdev_io_cache_size": 256, 00:18:03.714 "bdev_auto_examine": true, 00:18:03.714 "iobuf_small_cache_size": 128, 00:18:03.714 "iobuf_large_cache_size": 16 00:18:03.714 } 00:18:03.714 }, 00:18:03.714 { 00:18:03.714 "method": "bdev_raid_set_options", 00:18:03.714 "params": { 00:18:03.714 "process_window_size_kb": 1024, 00:18:03.714 "process_max_bandwidth_mb_sec": 0 00:18:03.714 } 00:18:03.714 }, 00:18:03.714 { 00:18:03.714 "method": "bdev_iscsi_set_options", 00:18:03.714 "params": { 00:18:03.714 "timeout_sec": 30 00:18:03.714 } 00:18:03.714 }, 00:18:03.714 { 00:18:03.714 "method": "bdev_nvme_set_options", 00:18:03.714 "params": { 00:18:03.714 "action_on_timeout": "none", 00:18:03.714 "timeout_us": 0, 00:18:03.714 "timeout_admin_us": 0, 00:18:03.714 "keep_alive_timeout_ms": 10000, 00:18:03.714 "arbitration_burst": 0, 00:18:03.714 "low_priority_weight": 0, 00:18:03.714 "medium_priority_weight": 0, 00:18:03.714 "high_priority_weight": 0, 00:18:03.714 "nvme_adminq_poll_period_us": 10000, 00:18:03.714 "nvme_ioq_poll_period_us": 0, 00:18:03.714 "io_queue_requests": 512, 00:18:03.714 "delay_cmd_submit": true, 00:18:03.714 "transport_retry_count": 4, 00:18:03.714 "bdev_retry_count": 3, 
00:18:03.714 "transport_ack_timeout": 0, 00:18:03.714 "ctrlr_loss_timeout_sec": 0, 00:18:03.714 "reconnect_delay_sec": 0, 00:18:03.714 "fast_io_fail_timeout_sec": 0, 00:18:03.714 "disable_auto_failback": false, 00:18:03.714 "generate_uuids": false, 00:18:03.714 "transport_tos": 0, 00:18:03.714 "nvme_error_stat": false, 00:18:03.714 "rdma_srq_size": 0, 00:18:03.714 "io_path_stat": false, 00:18:03.714 "allow_accel_sequence": false, 00:18:03.714 "rdma_max_cq_size": 0, 00:18:03.714 "rdma_cm_event_timeout_ms": 0, 00:18:03.714 "dhchap_digests": [ 00:18:03.714 "sha256", 00:18:03.714 "sha384", 00:18:03.714 "sha512" 00:18:03.714 ], 00:18:03.714 "dhchap_dhgroups": [ 00:18:03.714 "null", 00:18:03.714 "ffdhe2048", 00:18:03.714 "ffdhe3072", 00:18:03.714 "ffdhe4096", 00:18:03.714 "ffdhe6144", 00:18:03.714 "ffdhe8192" 00:18:03.714 ] 00:18:03.714 } 00:18:03.714 }, 00:18:03.714 { 00:18:03.714 "method": "bdev_nvme_attach_controller", 00:18:03.714 "params": { 00:18:03.714 "name": "nvme0", 00:18:03.714 "trtype": "TCP", 00:18:03.714 "adrfam": "IPv4", 00:18:03.714 "traddr": "10.0.0.2", 00:18:03.714 "trsvcid": "4420", 00:18:03.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.714 "prchk_reftag": false, 00:18:03.714 "prchk_guard": false, 00:18:03.714 "ctrlr_loss_timeout_sec": 0, 00:18:03.714 "reconnect_delay_sec": 0, 00:18:03.714 "fast_io_fail_timeout_sec": 0, 00:18:03.714 "psk": "key0", 00:18:03.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.714 "hdgst": false, 00:18:03.714 "ddgst": false 00:18:03.714 } 00:18:03.714 }, 00:18:03.714 { 00:18:03.714 "method": "bdev_nvme_set_hotplug", 00:18:03.714 "params": { 00:18:03.714 "period_us": 100000, 00:18:03.714 "enable": false 00:18:03.714 } 00:18:03.714 }, 00:18:03.714 { 00:18:03.714 "method": "bdev_enable_histogram", 00:18:03.714 "params": { 00:18:03.714 "name": "nvme0n1", 00:18:03.714 "enable": true 00:18:03.714 } 00:18:03.714 }, 00:18:03.714 { 00:18:03.714 "method": "bdev_wait_for_examine" 00:18:03.714 } 00:18:03.714 ] 00:18:03.714 }, 00:18:03.714 { 00:18:03.714 "subsystem": "nbd", 00:18:03.714 "config": [] 00:18:03.714 } 00:18:03.714 ] 00:18:03.714 }' 00:18:03.714 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2582732 00:18:03.714 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2582732 ']' 00:18:03.714 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2582732 00:18:03.714 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:03.714 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:03.714 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2582732 00:18:03.714 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:03.714 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:03.714 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2582732' 00:18:03.714 killing process with pid 2582732 00:18:03.714 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2582732 00:18:03.714 Received shutdown signal, test time was about 1.000000 seconds 00:18:03.714 00:18:03.714 Latency(us) 00:18:03.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.714 
=================================================================================================================== 00:18:03.714 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.714 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2582732 00:18:03.973 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2582707 00:18:03.973 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2582707 ']' 00:18:03.973 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2582707 00:18:03.973 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:03.973 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:03.973 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2582707 00:18:03.973 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:03.973 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:03.973 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2582707' 00:18:03.973 killing process with pid 2582707 00:18:03.973 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2582707 00:18:03.973 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2582707 00:18:04.232 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:18:04.232 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:04.232 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:18:04.232 "subsystems": [ 00:18:04.232 { 00:18:04.232 "subsystem": "keyring", 00:18:04.232 "config": [ 00:18:04.232 { 00:18:04.232 "method": "keyring_file_add_key", 00:18:04.232 "params": { 00:18:04.232 "name": "key0", 00:18:04.232 "path": "/tmp/tmp.AfjINMTOHw" 00:18:04.232 } 00:18:04.232 } 00:18:04.232 ] 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "subsystem": "iobuf", 00:18:04.232 "config": [ 00:18:04.232 { 00:18:04.232 "method": "iobuf_set_options", 00:18:04.232 "params": { 00:18:04.232 "small_pool_count": 8192, 00:18:04.232 "large_pool_count": 1024, 00:18:04.232 "small_bufsize": 8192, 00:18:04.232 "large_bufsize": 135168 00:18:04.232 } 00:18:04.232 } 00:18:04.232 ] 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "subsystem": "sock", 00:18:04.232 "config": [ 00:18:04.232 { 00:18:04.232 "method": "sock_set_default_impl", 00:18:04.232 "params": { 00:18:04.232 "impl_name": "posix" 00:18:04.232 } 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "method": "sock_impl_set_options", 00:18:04.232 "params": { 00:18:04.232 "impl_name": "ssl", 00:18:04.232 "recv_buf_size": 4096, 00:18:04.232 "send_buf_size": 4096, 00:18:04.232 "enable_recv_pipe": true, 00:18:04.232 "enable_quickack": false, 00:18:04.232 "enable_placement_id": 0, 00:18:04.232 "enable_zerocopy_send_server": true, 00:18:04.232 "enable_zerocopy_send_client": false, 00:18:04.232 "zerocopy_threshold": 0, 00:18:04.232 "tls_version": 0, 00:18:04.232 "enable_ktls": false 00:18:04.232 } 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "method": "sock_impl_set_options", 00:18:04.232 "params": { 00:18:04.232 "impl_name": "posix", 00:18:04.232 "recv_buf_size": 2097152, 
00:18:04.232 "send_buf_size": 2097152, 00:18:04.232 "enable_recv_pipe": true, 00:18:04.232 "enable_quickack": false, 00:18:04.232 "enable_placement_id": 0, 00:18:04.232 "enable_zerocopy_send_server": true, 00:18:04.232 "enable_zerocopy_send_client": false, 00:18:04.232 "zerocopy_threshold": 0, 00:18:04.232 "tls_version": 0, 00:18:04.232 "enable_ktls": false 00:18:04.232 } 00:18:04.232 } 00:18:04.232 ] 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "subsystem": "vmd", 00:18:04.232 "config": [] 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "subsystem": "accel", 00:18:04.232 "config": [ 00:18:04.232 { 00:18:04.232 "method": "accel_set_options", 00:18:04.232 "params": { 00:18:04.232 "small_cache_size": 128, 00:18:04.232 "large_cache_size": 16, 00:18:04.232 "task_count": 2048, 00:18:04.232 "sequence_count": 2048, 00:18:04.232 "buf_count": 2048 00:18:04.232 } 00:18:04.232 } 00:18:04.232 ] 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "subsystem": "bdev", 00:18:04.232 "config": [ 00:18:04.232 { 00:18:04.232 "method": "bdev_set_options", 00:18:04.232 "params": { 00:18:04.232 "bdev_io_pool_size": 65535, 00:18:04.232 "bdev_io_cache_size": 256, 00:18:04.232 "bdev_auto_examine": true, 00:18:04.232 "iobuf_small_cache_size": 128, 00:18:04.232 "iobuf_large_cache_size": 16 00:18:04.232 } 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "method": "bdev_raid_set_options", 00:18:04.232 "params": { 00:18:04.232 "process_window_size_kb": 1024, 00:18:04.232 "process_max_bandwidth_mb_sec": 0 00:18:04.232 } 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "method": "bdev_iscsi_set_options", 00:18:04.232 "params": { 00:18:04.232 "timeout_sec": 30 00:18:04.232 } 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "method": "bdev_nvme_set_options", 00:18:04.232 "params": { 00:18:04.232 "action_on_timeout": "none", 00:18:04.232 "timeout_us": 0, 00:18:04.232 "timeout_admin_us": 0, 00:18:04.232 "keep_alive_timeout_ms": 10000, 00:18:04.232 "arbitration_burst": 0, 00:18:04.232 "low_priority_weight": 0, 00:18:04.232 "medium_priority_weight": 0, 00:18:04.232 "high_priority_weight": 0, 00:18:04.232 "nvme_adminq_poll_period_us": 10000, 00:18:04.232 "nvme_ioq_poll_period_us": 0, 00:18:04.232 "io_queue_requests": 0, 00:18:04.232 "delay_cmd_submit": true, 00:18:04.232 "transport_retry_count": 4, 00:18:04.232 "bdev_retry_count": 3, 00:18:04.232 "transport_ack_timeout": 0, 00:18:04.232 "ctrlr_loss_timeout_sec": 0, 00:18:04.232 "reconnect_delay_sec": 0, 00:18:04.232 "fast_io_fail_timeout_sec": 0, 00:18:04.232 "disable_auto_failback": false, 00:18:04.232 "generate_uuids": false, 00:18:04.232 "transport_tos": 0, 00:18:04.232 "nvme_error_stat": false, 00:18:04.232 "rdma_srq_size": 0, 00:18:04.232 "io_path_stat": false, 00:18:04.232 "allow_accel_sequence": false, 00:18:04.232 "rdma_max_cq_size": 0, 00:18:04.232 "rdma_cm_event_timeout_ms": 0, 00:18:04.232 "dhchap_digests": [ 00:18:04.232 "sha256", 00:18:04.232 "sha384", 00:18:04.232 "sha512" 00:18:04.232 ], 00:18:04.232 "dhchap_dhgroups": [ 00:18:04.232 "null", 00:18:04.232 "ffdhe2048", 00:18:04.232 "ffdhe3072", 00:18:04.232 "ffdhe4096", 00:18:04.232 "ffdhe6144", 00:18:04.232 "ffdhe8192" 00:18:04.232 ] 00:18:04.232 } 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "method": "bdev_nvme_set_hotplug", 00:18:04.232 "params": { 00:18:04.232 "period_us": 100000, 00:18:04.232 "enable": false 00:18:04.232 } 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "method": "bdev_malloc_create", 00:18:04.232 "params": { 00:18:04.232 "name": "malloc0", 00:18:04.232 "num_blocks": 8192, 00:18:04.232 "block_size": 4096, 00:18:04.232 
"physical_block_size": 4096, 00:18:04.232 "uuid": "b72d93b7-4cb9-41bd-89eb-c5432d6a9433", 00:18:04.232 "optimal_io_boundary": 0, 00:18:04.232 "md_size": 0, 00:18:04.232 "dif_type": 0, 00:18:04.232 "dif_is_head_of_md": false, 00:18:04.232 "dif_pi_format": 0 00:18:04.232 } 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "method": "bdev_wait_for_examine" 00:18:04.232 } 00:18:04.232 ] 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "subsystem": "nbd", 00:18:04.232 "config": [] 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "subsystem": "scheduler", 00:18:04.232 "config": [ 00:18:04.232 { 00:18:04.232 "method": "framework_set_scheduler", 00:18:04.232 "params": { 00:18:04.232 "name": "static" 00:18:04.232 } 00:18:04.232 } 00:18:04.232 ] 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "subsystem": "nvmf", 00:18:04.232 "config": [ 00:18:04.232 { 00:18:04.232 "method": "nvmf_set_config", 00:18:04.232 "params": { 00:18:04.232 "discovery_filter": "match_any", 00:18:04.232 "admin_cmd_passthru": { 00:18:04.232 "identify_ctrlr": false 00:18:04.232 } 00:18:04.232 } 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "method": "nvmf_set_max_subsystems", 00:18:04.232 "params": { 00:18:04.232 "max_subsystems": 1024 00:18:04.232 } 00:18:04.232 }, 00:18:04.232 { 00:18:04.232 "method": "nvmf_set_crdt", 00:18:04.232 "params": { 00:18:04.232 "crdt1": 0, 00:18:04.233 "crdt2": 0, 00:18:04.233 "crdt3": 0 00:18:04.233 } 00:18:04.233 }, 00:18:04.233 { 00:18:04.233 "method": "nvmf_create_transport", 00:18:04.233 "params": { 00:18:04.233 "trtype": "TCP", 00:18:04.233 "max_queue_depth": 128, 00:18:04.233 "max_io_qpairs_per_ctrlr": 127, 00:18:04.233 "in_capsule_data_size": 4096, 00:18:04.233 "max_io_size": 131072, 00:18:04.233 "io_unit_size": 131072, 00:18:04.233 "max_aq_depth": 128, 00:18:04.233 "num_shared_buffers": 511, 00:18:04.233 "buf_cache_size": 4294967295, 00:18:04.233 "dif_insert_or_strip": false, 00:18:04.233 "zcopy": false, 00:18:04.233 "c2h_success": false, 00:18:04.233 "sock_priority": 0, 00:18:04.233 "abort_timeout_sec": 1, 00:18:04.233 "ack_timeout": 0, 00:18:04.233 "data_wr_pool_size": 0 00:18:04.233 } 00:18:04.233 }, 00:18:04.233 { 00:18:04.233 "method": "nvmf_create_subsystem", 00:18:04.233 "params": { 00:18:04.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.233 "allow_any_host": false, 00:18:04.233 "serial_number": "00000000000000000000", 00:18:04.233 "model_number": "SPDK bdev Controller", 00:18:04.233 "max_namespaces": 32, 00:18:04.233 "min_cntlid": 1, 00:18:04.233 "max_cntlid": 65519, 00:18:04.233 "ana_reporting": false 00:18:04.233 } 00:18:04.233 }, 00:18:04.233 { 00:18:04.233 "method": "nvmf_subsystem_add_host", 00:18:04.233 "params": { 00:18:04.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.233 "host": "nqn.2016-06.io.spdk:host1", 00:18:04.233 "psk": "key0" 00:18:04.233 } 00:18:04.233 }, 00:18:04.233 { 00:18:04.233 "method": "nvmf_subsystem_add_ns", 00:18:04.233 "params": { 00:18:04.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.233 "namespace": { 00:18:04.233 "nsid": 1, 00:18:04.233 "bdev_name": "malloc0", 00:18:04.233 "nguid": "B72D93B74CB941BD89EBC5432D6A9433", 00:18:04.233 "uuid": "b72d93b7-4cb9-41bd-89eb-c5432d6a9433", 00:18:04.233 "no_auto_visible": false 00:18:04.233 } 00:18:04.233 } 00:18:04.233 }, 00:18:04.233 { 00:18:04.233 "method": "nvmf_subsystem_add_listener", 00:18:04.233 "params": { 00:18:04.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.233 "listen_address": { 00:18:04.233 "trtype": "TCP", 00:18:04.233 "adrfam": "IPv4", 00:18:04.233 "traddr": "10.0.0.2", 00:18:04.233 "trsvcid": "4420" 
00:18:04.233 }, 00:18:04.233 "secure_channel": false, 00:18:04.233 "sock_impl": "ssl" 00:18:04.233 } 00:18:04.233 } 00:18:04.233 ] 00:18:04.233 } 00:18:04.233 ] 00:18:04.233 }' 00:18:04.233 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:04.233 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.233 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2583554 00:18:04.233 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:04.233 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2583554 00:18:04.233 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2583554 ']' 00:18:04.233 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.233 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:04.233 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.233 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:04.233 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.233 [2024-07-24 19:15:10.064006] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:18:04.233 [2024-07-24 19:15:10.064094] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.233 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.233 [2024-07-24 19:15:10.129241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.233 [2024-07-24 19:15:10.244745] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.233 [2024-07-24 19:15:10.244803] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.233 [2024-07-24 19:15:10.244819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.233 [2024-07-24 19:15:10.244832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.233 [2024-07-24 19:15:10.244844] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:04.233 [2024-07-24 19:15:10.244931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.491 [2024-07-24 19:15:10.473091] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.749 [2024-07-24 19:15:10.522307] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:04.749 [2024-07-24 19:15:10.522553] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2583685 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2583685 /var/tmp/bdevperf.sock 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2583685 ']' 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
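[Note: the target above is launched with "-c /dev/fd/62", i.e. the large JSON subsystem config is never written to disk; it is echoed into an anonymous file descriptor that nvmf_tgt reads at startup. A minimal sketch of the same pattern using bash process substitution; the config body is abbreviated to the TLS-relevant tail shown in the dump above, and the binary path is an assumption, not the harness's exact invocation:]

    # Sketch: feed a JSON config to nvmf_tgt on an anonymous fd, the same
    # trick behind "-c /dev/fd/62" in the trace above. Config is abbreviated;
    # a real config also needs the transport/subsystem/namespace entries.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(cat <<'EOF'
    { "subsystems": [ { "subsystem": "nvmf", "config": [
      { "method": "nvmf_subsystem_add_host", "params": {
          "nqn": "nqn.2016-06.io.spdk:cnode1",
          "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
      { "method": "nvmf_subsystem_add_listener", "params": {
          "nqn": "nqn.2016-06.io.spdk:cnode1",
          "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
            "traddr": "10.0.0.2", "trsvcid": "4420" },
          "secure_channel": false, "sock_impl": "ssl" } } ] } ] }
    EOF
    )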
00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.315 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:18:05.315 "subsystems": [ 00:18:05.315 { 00:18:05.315 "subsystem": "keyring", 00:18:05.315 "config": [ 00:18:05.315 { 00:18:05.315 "method": "keyring_file_add_key", 00:18:05.315 "params": { 00:18:05.315 "name": "key0", 00:18:05.315 "path": "/tmp/tmp.AfjINMTOHw" 00:18:05.315 } 00:18:05.315 } 00:18:05.315 ] 00:18:05.315 }, 00:18:05.315 { 00:18:05.315 "subsystem": "iobuf", 00:18:05.315 "config": [ 00:18:05.315 { 00:18:05.315 "method": "iobuf_set_options", 00:18:05.315 "params": { 00:18:05.315 "small_pool_count": 8192, 00:18:05.315 "large_pool_count": 1024, 00:18:05.315 "small_bufsize": 8192, 00:18:05.315 "large_bufsize": 135168 00:18:05.315 } 00:18:05.315 } 00:18:05.315 ] 00:18:05.315 }, 00:18:05.315 { 00:18:05.315 "subsystem": "sock", 00:18:05.315 "config": [ 00:18:05.315 { 00:18:05.315 "method": "sock_set_default_impl", 00:18:05.315 "params": { 00:18:05.315 "impl_name": "posix" 00:18:05.315 } 00:18:05.315 }, 00:18:05.315 { 00:18:05.315 "method": "sock_impl_set_options", 00:18:05.315 "params": { 00:18:05.315 "impl_name": "ssl", 00:18:05.315 "recv_buf_size": 4096, 00:18:05.315 "send_buf_size": 4096, 00:18:05.315 "enable_recv_pipe": true, 00:18:05.315 "enable_quickack": false, 00:18:05.315 "enable_placement_id": 0, 00:18:05.315 "enable_zerocopy_send_server": true, 00:18:05.315 "enable_zerocopy_send_client": false, 00:18:05.315 "zerocopy_threshold": 0, 00:18:05.315 "tls_version": 0, 00:18:05.315 "enable_ktls": false 00:18:05.315 } 00:18:05.315 }, 00:18:05.315 { 00:18:05.315 "method": "sock_impl_set_options", 00:18:05.315 "params": { 00:18:05.315 "impl_name": "posix", 00:18:05.315 "recv_buf_size": 2097152, 00:18:05.315 "send_buf_size": 2097152, 00:18:05.315 "enable_recv_pipe": true, 00:18:05.315 "enable_quickack": false, 00:18:05.315 "enable_placement_id": 0, 00:18:05.315 "enable_zerocopy_send_server": true, 00:18:05.315 "enable_zerocopy_send_client": false, 00:18:05.315 "zerocopy_threshold": 0, 00:18:05.315 "tls_version": 0, 00:18:05.315 "enable_ktls": false 00:18:05.315 } 00:18:05.315 } 00:18:05.315 ] 00:18:05.315 }, 00:18:05.315 { 00:18:05.315 "subsystem": "vmd", 00:18:05.315 "config": [] 00:18:05.315 }, 00:18:05.315 { 00:18:05.315 "subsystem": "accel", 00:18:05.315 "config": [ 00:18:05.315 { 00:18:05.315 "method": "accel_set_options", 00:18:05.315 "params": { 00:18:05.315 "small_cache_size": 128, 00:18:05.315 "large_cache_size": 16, 00:18:05.315 "task_count": 2048, 00:18:05.315 "sequence_count": 2048, 00:18:05.315 "buf_count": 2048 00:18:05.315 } 00:18:05.315 } 00:18:05.315 ] 00:18:05.315 }, 00:18:05.315 { 00:18:05.315 "subsystem": "bdev", 00:18:05.315 "config": [ 00:18:05.315 { 00:18:05.315 "method": "bdev_set_options", 00:18:05.315 "params": { 00:18:05.315 "bdev_io_pool_size": 65535, 00:18:05.315 "bdev_io_cache_size": 256, 00:18:05.315 "bdev_auto_examine": true, 00:18:05.315 "iobuf_small_cache_size": 128, 00:18:05.315 "iobuf_large_cache_size": 16 00:18:05.316 } 00:18:05.316 }, 00:18:05.316 { 00:18:05.316 "method": "bdev_raid_set_options", 00:18:05.316 
"params": { 00:18:05.316 "process_window_size_kb": 1024, 00:18:05.316 "process_max_bandwidth_mb_sec": 0 00:18:05.316 } 00:18:05.316 }, 00:18:05.316 { 00:18:05.316 "method": "bdev_iscsi_set_options", 00:18:05.316 "params": { 00:18:05.316 "timeout_sec": 30 00:18:05.316 } 00:18:05.316 }, 00:18:05.316 { 00:18:05.316 "method": "bdev_nvme_set_options", 00:18:05.316 "params": { 00:18:05.316 "action_on_timeout": "none", 00:18:05.316 "timeout_us": 0, 00:18:05.316 "timeout_admin_us": 0, 00:18:05.316 "keep_alive_timeout_ms": 10000, 00:18:05.316 "arbitration_burst": 0, 00:18:05.316 "low_priority_weight": 0, 00:18:05.316 "medium_priority_weight": 0, 00:18:05.316 "high_priority_weight": 0, 00:18:05.316 "nvme_adminq_poll_period_us": 10000, 00:18:05.316 "nvme_ioq_poll_period_us": 0, 00:18:05.316 "io_queue_requests": 512, 00:18:05.316 "delay_cmd_submit": true, 00:18:05.316 "transport_retry_count": 4, 00:18:05.316 "bdev_retry_count": 3, 00:18:05.316 "transport_ack_timeout": 0, 00:18:05.316 "ctrlr_loss_timeout_sec": 0, 00:18:05.316 "reconnect_delay_sec": 0, 00:18:05.316 "fast_io_fail_timeout_sec": 0, 00:18:05.316 "disable_auto_failback": false, 00:18:05.316 "generate_uuids": false, 00:18:05.316 "transport_tos": 0, 00:18:05.316 "nvme_error_stat": false, 00:18:05.316 "rdma_srq_size": 0, 00:18:05.316 "io_path_stat": false, 00:18:05.316 "allow_accel_sequence": false, 00:18:05.316 "rdma_max_cq_size": 0, 00:18:05.316 "rdma_cm_event_timeout_ms": 0, 00:18:05.316 "dhchap_digests": [ 00:18:05.316 "sha256", 00:18:05.316 "sha384", 00:18:05.316 "sha512" 00:18:05.316 ], 00:18:05.316 "dhchap_dhgroups": [ 00:18:05.316 "null", 00:18:05.316 "ffdhe2048", 00:18:05.316 "ffdhe3072", 00:18:05.316 "ffdhe4096", 00:18:05.316 "ffdhe6144", 00:18:05.316 "ffdhe8192" 00:18:05.316 ] 00:18:05.316 } 00:18:05.316 }, 00:18:05.316 { 00:18:05.316 "method": "bdev_nvme_attach_controller", 00:18:05.316 "params": { 00:18:05.316 "name": "nvme0", 00:18:05.316 "trtype": "TCP", 00:18:05.316 "adrfam": "IPv4", 00:18:05.316 "traddr": "10.0.0.2", 00:18:05.316 "trsvcid": "4420", 00:18:05.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.316 "prchk_reftag": false, 00:18:05.316 "prchk_guard": false, 00:18:05.316 "ctrlr_loss_timeout_sec": 0, 00:18:05.316 "reconnect_delay_sec": 0, 00:18:05.316 "fast_io_fail_timeout_sec": 0, 00:18:05.316 "psk": "key0", 00:18:05.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:05.316 "hdgst": false, 00:18:05.316 "ddgst": false 00:18:05.316 } 00:18:05.316 }, 00:18:05.316 { 00:18:05.316 "method": "bdev_nvme_set_hotplug", 00:18:05.316 "params": { 00:18:05.316 "period_us": 100000, 00:18:05.316 "enable": false 00:18:05.316 } 00:18:05.316 }, 00:18:05.316 { 00:18:05.316 "method": "bdev_enable_histogram", 00:18:05.316 "params": { 00:18:05.316 "name": "nvme0n1", 00:18:05.316 "enable": true 00:18:05.316 } 00:18:05.316 }, 00:18:05.316 { 00:18:05.316 "method": "bdev_wait_for_examine" 00:18:05.316 } 00:18:05.316 ] 00:18:05.316 }, 00:18:05.316 { 00:18:05.316 "subsystem": "nbd", 00:18:05.316 "config": [] 00:18:05.316 } 00:18:05.316 ] 00:18:05.316 }' 00:18:05.316 [2024-07-24 19:15:11.177517] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:18:05.316 [2024-07-24 19:15:11.177614] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2583685 ] 00:18:05.316 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.316 [2024-07-24 19:15:11.239188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.574 [2024-07-24 19:15:11.359041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.574 [2024-07-24 19:15:11.523478] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.514 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:06.514 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:06.514 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:06.514 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:18:06.514 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.514 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:06.774 Running I/O for 1 seconds... 00:18:07.708 00:18:07.708 Latency(us) 00:18:07.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.708 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:07.708 Verification LBA range: start 0x0 length 0x2000 00:18:07.708 nvme0n1 : 1.02 3229.93 12.62 0.00 0.00 39142.73 7136.14 50875.35 00:18:07.708 =================================================================================================================== 00:18:07.708 Total : 3229.93 12.62 0.00 0.00 39142.73 7136.14 50875.35 00:18:07.708 0 00:18:07.708 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:18:07.708 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:18:07.708 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:07.708 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:18:07.708 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:18:07.708 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:18:07.708 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:07.708 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:18:07.708 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:18:07.708 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:18:07.708 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:07.708 nvmf_trace.0 00:18:07.966 19:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:18:07.966 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2583685 00:18:07.966 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2583685 ']' 00:18:07.966 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2583685 00:18:07.966 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:07.966 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.966 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2583685 00:18:07.966 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:07.966 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:07.966 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2583685' 00:18:07.966 killing process with pid 2583685 00:18:07.966 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2583685 00:18:07.966 Received shutdown signal, test time was about 1.000000 seconds 00:18:07.966 00:18:07.966 Latency(us) 00:18:07.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.966 =================================================================================================================== 00:18:07.966 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.966 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2583685 00:18:08.225 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:08.225 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:08.225 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:08.225 rmmod nvme_tcp 00:18:08.225 rmmod nvme_fabrics 00:18:08.225 rmmod nvme_keyring 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2583554 ']' 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2583554 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2583554 ']' 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2583554 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:08.225 19:15:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2583554 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2583554' 00:18:08.225 killing process with pid 2583554 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2583554 00:18:08.225 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2583554 00:18:08.483 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:08.483 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:08.483 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:08.483 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:08.483 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:08.483 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.483 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.483 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.388 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:10.388 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.uzMo9VEw6m /tmp/tmp.5Dwf8nqjR2 /tmp/tmp.AfjINMTOHw 00:18:10.388 00:18:10.388 real 1m20.191s 00:18:10.388 user 2m12.924s 00:18:10.388 sys 0m24.428s 00:18:10.388 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:10.388 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.388 ************************************ 00:18:10.388 END TEST nvmf_tls 00:18:10.388 ************************************ 00:18:10.388 19:15:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:10.388 19:15:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:10.388 19:15:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:10.388 19:15:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:10.388 ************************************ 00:18:10.388 START TEST nvmf_fips 00:18:10.388 ************************************ 00:18:10.388 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:10.645 * Looking for test storage... 
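[Note: the fips.sh run that follows gates on two things before any NVMe traffic: an OpenSSL version compare (the long cmp_versions walk in the trace below, requiring >= 3.0.0 for the provider-based FIPS module) and a deliberate "openssl md5" probe whose failure proves FIPS enforcement, so the "Error setting digest" lines further down are the expected pass condition, not a fault. A condensed standalone sketch of both gates; the sort -V compare is a simplification of scripts/common.sh's cmp_versions, not the harness's code:]

    # Sketch of the two FIPS gates exercised by fips.sh below.
    # 1) OpenSSL must be >= 3.0.0 (provider-based FIPS module era).
    ver=$(openssl version | awk '{print $2}')   # e.g. "3.0.9" in this run
    if [[ $(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1) != 3.0.0 ]]; then
        echo "OpenSSL $ver too old for the FIPS provider" >&2; exit 1
    fi
    # 2) With a FIPS-only OpenSSL config active, a non-approved digest
    #    must be rejected; success here would mean FIPS is NOT enforced.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 unexpectedly allowed: FIPS mode not enforced" >&2; exit 1
    fi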
00:18:10.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:10.646 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:10.647 Error setting digest 00:18:10.647 0072A6448C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:10.647 0072A6448C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:10.647 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:12.549 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 
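[Note: device discovery here matches PCI IDs against known e810/x722/mlx tables, then resolves each matched PCI function to its kernel net device through sysfs, which is where the "Found net devices under 0000:08:00.0: cvl_0_0" lines just below come from. A sketch of that resolution step, mirroring the pci_net_devs glob in the trace:]

    # Sketch: map a matched NIC's PCI address to its netdev name, the same
    # /sys walk as nvmf/common.sh's pci_net_devs expansion below.
    for pci in 0000:08:00.0 0000:08:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done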
00:18:12.549 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:12.549 Found net devices under 0000:08:00.0: cvl_0_0 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:12.549 Found net devices under 0000:08:00.1: cvl_0_1 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:12.549 
19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:12.549 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:12.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:12.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:18:12.550 00:18:12.550 --- 10.0.0.2 ping statistics --- 00:18:12.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.550 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:12.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:12.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:18:12.550 00:18:12.550 --- 10.0.0.1 ping statistics --- 00:18:12.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.550 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2585511 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2585511 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2585511 ']' 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.550 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:12.550 [2024-07-24 19:15:18.517902] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
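[Note: the nvmf_tcp_init plumbing traced above is what makes the two pings work without loopback: the first NIC port (cvl_0_0) is moved into a private network namespace to play the target at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and port 4420 is opened on the initiator side. The sequence, collected from the trace (interface names are specific to this rig):]

    # Sketch: target NIC in its own netns, initiator NIC in the root ns,
    # mirroring the nvmf_tcp_init steps traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity checks, as in the ping output above:
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1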
00:18:12.550 [2024-07-24 19:15:18.518001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.550 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.808 [2024-07-24 19:15:18.582868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.809 [2024-07-24 19:15:18.698060] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.809 [2024-07-24 19:15:18.698112] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.809 [2024-07-24 19:15:18.698129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.809 [2024-07-24 19:15:18.698143] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.809 [2024-07-24 19:15:18.698155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.809 [2024-07-24 19:15:18.698183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.809 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.809 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:18:12.809 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:12.809 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:12.809 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:13.066 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.066 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:13.066 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:13.066 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:13.066 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:13.066 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:13.066 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:13.066 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:13.066 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.325 [2024-07-24 19:15:19.132366] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.325 [2024-07-24 19:15:19.148355] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:13.325 [2024-07-24 19:15:19.148578] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.325 
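[Note: fips.sh materializes the TLS PSK as a file: the interchange-format secret (NVMeTLSkey-1:01:...) is written to key.txt, locked down to 0600, and handed to the target by NQN plus path. The deprecation warning just below flags exactly this path-based form (slated for removal in v24.09) in favor of keyring-registered keys. A sketch of the file handling; the key value is the throwaway test secret from this trace, and the rpc.py argument order is an assumption:]

    # Sketch: install the path-based TLS PSK the way setup_nvmf_tgt_conf
    # does above. This is the deprecated form the warning below refers to.
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=./test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"   # matches the harness's chmod in the trace
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$key_path"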
[2024-07-24 19:15:19.178102] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:13.325 malloc0 00:18:13.325 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.325 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2585549 00:18:13.325 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.325 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2585549 /var/tmp/bdevperf.sock 00:18:13.325 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2585549 ']' 00:18:13.325 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.325 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.325 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.325 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.325 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:13.325 [2024-07-24 19:15:19.264745] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:18:13.325 [2024-07-24 19:15:19.264828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2585549 ] 00:18:13.325 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.325 [2024-07-24 19:15:19.319109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.584 [2024-07-24 19:15:19.437476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.584 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.584 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:18:13.584 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:13.842 [2024-07-24 19:15:19.766013] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.842 [2024-07-24 19:15:19.766140] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:13.842 TLSTESTn1 00:18:14.100 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:14.100 Running I/O for 10 seconds... 
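The initiator side above, condensed (all three commands appear verbatim in the trace; paths shortened):

# bdevperf pinned to core 2 (-m 0x4), started idle (-z) on a private RPC socket:
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# Attach over TLS using the same PSK file; controller TLSTEST exposes bdev TLSTESTn1:
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt
# Start the queued verify workload for 10 seconds:
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests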
00:18:24.063 00:18:24.063 Latency(us) 00:18:24.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.063 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:24.063 Verification LBA range: start 0x0 length 0x2000 00:18:24.063 TLSTESTn1 : 10.02 3336.11 13.03 0.00 0.00 38292.13 6505.05 54758.97 00:18:24.063 =================================================================================================================== 00:18:24.063 Total : 3336.11 13.03 0.00 0.00 38292.13 6505.05 54758.97 00:18:24.063 0 00:18:24.063 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:24.063 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:24.063 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:18:24.063 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:18:24.063 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:18:24.063 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:24.063 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:18:24.063 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:18:24.063 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:18:24.063 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:24.063 nvmf_trace.0 00:18:24.321 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:18:24.321 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2585549 00:18:24.321 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2585549 ']' 00:18:24.321 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2585549 00:18:24.321 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:18:24.321 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:24.321 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2585549 00:18:24.321 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:24.321 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:24.321 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2585549' 00:18:24.321 killing process with pid 2585549 00:18:24.321 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2585549 00:18:24.321 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.321 00:18:24.321 Latency(us) 00:18:24.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.321 =================================================================================================================== 00:18:24.321 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.321 
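The cleanup above archives the trace shared-memory file named in the earlier app_setup_trace notices; condensed ($output_dir stands in for the spdk/../output path in the trace):

shm_file=$(find /dev/shm -name '*.0' -printf '%f\n')                        # -> nvmf_trace.0
tar -C /dev/shm/ -cvzf "$output_dir/${shm_file}_shm.tar.gz" "$shm_file"     # kept for offline spdk_trace analysis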
[2024-07-24 19:15:30.123225] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:24.321 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2585549 00:18:24.579 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:24.579 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:24.579 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:24.579 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:24.579 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:24.579 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:24.579 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:24.579 rmmod nvme_tcp 00:18:24.579 rmmod nvme_fabrics 00:18:24.579 rmmod nvme_keyring 00:18:24.579 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:24.579 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:24.579 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:24.579 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2585511 ']' 00:18:24.579 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2585511 00:18:24.579 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2585511 ']' 00:18:24.580 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2585511 00:18:24.580 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:18:24.580 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:24.580 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2585511 00:18:24.580 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:24.580 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:24.580 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2585511' 00:18:24.580 killing process with pid 2585511 00:18:24.580 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2585511 00:18:24.580 [2024-07-24 19:15:30.417310] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:24.580 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2585511 00:18:24.840 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:24.840 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:24.840 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:24.840 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:24.840 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:24.840 19:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.840 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.840 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.747 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:26.747 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:26.747 00:18:26.747 real 0m16.298s 00:18:26.747 user 0m21.848s 00:18:26.747 sys 0m4.841s 00:18:26.747 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:26.747 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:26.747 ************************************ 00:18:26.747 END TEST nvmf_fips 00:18:26.747 ************************************ 00:18:26.747 19:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:18:26.747 19:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:18:26.747 19:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:18:26.747 19:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:18:26.747 19:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:18:26.747 19:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:28.647 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.647 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:18:28.647 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:28.647 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:28.647 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:28.647 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:28.647 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:28.647 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:18:28.647 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:28.647 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.648 
19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:28.648 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:28.648 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:28.648 Found net devices under 0000:08:00.0: cvl_0_0 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:28.648 Found net devices under 0000:08:00.1: cvl_0_1 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:28.648 ************************************ 00:18:28.648 START TEST nvmf_perf_adq 00:18:28.648 ************************************ 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:28.648 * Looking for test storage... 
00:18:28.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.648 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.648 19:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:28.649 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:30.025 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:30.026 19:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:30.026 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:30.026 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:30.026 Found net devices under 0000:08:00.0: cvl_0_0 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:30.026 Found net devices under 0000:08:00.1: cvl_0_1 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:30.026 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:30.593 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:32.506 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:37.819 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:37.820 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:37.820 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:37.820 Found net devices under 0000:08:00.0: cvl_0_0 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.820 19:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:37.820 Found net devices under 0000:08:00.1: cvl_0_1 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
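nvmf_tcp_init above splits the two ice ports across network namespaces so target and initiator can share one host: cvl_0_0 (10.0.0.2, target side) moves into cvl_0_0_ns_spdk while cvl_0_1 (10.0.0.1, initiator side) stays in the default namespace. Condensed from the trace; the iptables rule and ping checks that follow verify the path:

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port leaves the default ns
ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up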
00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:37.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:18:37.820 00:18:37.820 --- 10.0.0.2 ping statistics --- 00:18:37.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.820 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:37.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:37.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:18:37.820 00:18:37.820 --- 10.0.0.1 ping statistics --- 00:18:37.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.820 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:37.820 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2589942 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2589942 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2589942 ']' 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:37.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:37.821 [2024-07-24 19:15:43.578212] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:18:37.821 [2024-07-24 19:15:43.578309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.821 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.821 [2024-07-24 19:15:43.644261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:37.821 [2024-07-24 19:15:43.762607] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.821 [2024-07-24 19:15:43.762657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.821 [2024-07-24 19:15:43.762673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.821 [2024-07-24 19:15:43.762686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.821 [2024-07-24 19:15:43.762698] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:37.821 [2024-07-24 19:15:43.762770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.821 [2024-07-24 19:15:43.762852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.821 [2024-07-24 19:15:43.762885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:37.821 [2024-07-24 19:15:43.762889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:37.821 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
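Because nvmf_tgt was started with --wait-for-rpc, adq_configure_nvmf_target can apply socket options before the framework initializes, as the trace that follows shows. A condensed sketch (rpc_cmd in the trace is the harness wrapper around rpc.py; the jq extraction mirrors the 'jq -r .impl_name' call above):

impl=$(rpc.py sock_get_default_impl | jq -r .impl_name)                # -> posix
rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i "$impl"
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0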
00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:38.080 [2024-07-24 19:15:43.991559] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.080 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:38.080 Malloc1 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:38.080 [2024-07-24 19:15:44.041791] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2590051 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:18:38.080 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:38.080 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.611 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:40.612 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.612 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.612 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.612 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:18:40.612 "tick_rate": 2700000000, 00:18:40.612 "poll_groups": [ 00:18:40.612 { 00:18:40.612 "name": "nvmf_tgt_poll_group_000", 00:18:40.612 "admin_qpairs": 1, 00:18:40.612 "io_qpairs": 1, 00:18:40.612 "current_admin_qpairs": 1, 00:18:40.612 "current_io_qpairs": 1, 00:18:40.612 "pending_bdev_io": 0, 00:18:40.612 "completed_nvme_io": 19239, 00:18:40.612 "transports": [ 00:18:40.612 { 00:18:40.612 "trtype": "TCP" 00:18:40.612 } 00:18:40.612 ] 00:18:40.612 }, 00:18:40.612 { 00:18:40.612 "name": "nvmf_tgt_poll_group_001", 00:18:40.612 "admin_qpairs": 0, 00:18:40.612 "io_qpairs": 1, 00:18:40.612 "current_admin_qpairs": 0, 00:18:40.612 "current_io_qpairs": 1, 00:18:40.612 "pending_bdev_io": 0, 00:18:40.612 "completed_nvme_io": 17442, 00:18:40.612 "transports": [ 00:18:40.612 { 00:18:40.612 "trtype": "TCP" 00:18:40.612 } 00:18:40.612 ] 00:18:40.612 }, 00:18:40.612 { 00:18:40.612 "name": "nvmf_tgt_poll_group_002", 00:18:40.612 "admin_qpairs": 0, 00:18:40.612 "io_qpairs": 1, 00:18:40.612 "current_admin_qpairs": 0, 00:18:40.612 "current_io_qpairs": 1, 00:18:40.612 "pending_bdev_io": 0, 00:18:40.612 "completed_nvme_io": 19299, 00:18:40.612 "transports": [ 00:18:40.612 { 00:18:40.612 "trtype": "TCP" 00:18:40.612 } 00:18:40.612 ] 00:18:40.612 }, 00:18:40.612 { 00:18:40.612 "name": "nvmf_tgt_poll_group_003", 00:18:40.612 "admin_qpairs": 0, 00:18:40.612 "io_qpairs": 1, 00:18:40.612 "current_admin_qpairs": 0, 00:18:40.612 "current_io_qpairs": 1, 00:18:40.612 "pending_bdev_io": 0, 00:18:40.612 "completed_nvme_io": 18706, 00:18:40.612 "transports": [ 00:18:40.612 { 00:18:40.612 "trtype": "TCP" 00:18:40.612 } 00:18:40.612 ] 00:18:40.612 } 00:18:40.612 ] 00:18:40.612 }' 00:18:40.612 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:18:40.612 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:18:40.612 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:18:40.612 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:18:40.612 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 2590051 00:18:48.727 Initializing NVMe Controllers 00:18:48.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:48.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:48.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:48.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:48.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:48.727 Initialization complete. Launching workers. 00:18:48.727 ======================================================== 00:18:48.727 Latency(us) 00:18:48.727 Device Information : IOPS MiB/s Average min max 00:18:48.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9888.60 38.63 6474.60 4013.61 9110.19 00:18:48.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9227.40 36.04 6936.10 2523.83 10193.95 00:18:48.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10206.70 39.87 6272.58 3525.42 7764.81 00:18:48.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10134.20 39.59 6316.95 2203.74 11363.17 00:18:48.727 ======================================================== 00:18:48.727 Total : 39456.90 154.13 6489.78 2203.74 11363.17 00:18:48.727 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:48.727 rmmod nvme_tcp 00:18:48.727 rmmod nvme_fabrics 00:18:48.727 rmmod nvme_keyring 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2589942 ']' 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2589942 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2589942 ']' 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2589942 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2589942 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:48.727 19:15:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2589942' 00:18:48.727 killing process with pid 2589942 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2589942 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2589942 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.727 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.633 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:50.633 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:18:50.633 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:51.203 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:53.112 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:58.381 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:58.381 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:58.382 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:58.382 Found net devices under 0000:08:00.0: cvl_0_0 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:58.382 Found net devices under 0000:08:00.1: cvl_0_1 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.382 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:58.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:18:58.382 00:18:58.382 --- 10.0.0.2 ping statistics --- 00:18:58.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.382 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:58.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:18:58.382 00:18:58.382 --- 10.0.0.1 ping statistics --- 00:18:58.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.382 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:18:58.382 net.core.busy_poll = 1 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:18:58.382 net.core.busy_read = 1 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:18:58.382 
19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.382 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2592058 00:18:58.383 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:58.383 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2592058 00:18:58.383 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2592058 ']' 00:18:58.383 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.383 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:58.383 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.383 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:58.383 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.383 [2024-07-24 19:16:04.290589] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:18:58.383 [2024-07-24 19:16:04.290684] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.383 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.383 [2024-07-24 19:16:04.355363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.641 [2024-07-24 19:16:04.472369] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.641 [2024-07-24 19:16:04.472430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.641 [2024-07-24 19:16:04.472446] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.641 [2024-07-24 19:16:04.472459] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.641 [2024-07-24 19:16:04.472471] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
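Condensing the adq_configure_driver steps above into one place (commands verbatim from the trace; the cvl_0_0 interface and the cvl_0_0_ns_spdk namespace are specific to this rig's E810 setup):

    # enable hardware TC offload and disable packet-inspect optimization on the port
    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # busy-poll sockets instead of sleeping on interrupts
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two hardware traffic classes: TC0 on queues 0-1, TC1 on queues 2-3
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP (dst port 4420) into TC1 entirely in hardware (skip_sw)
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # align XPS transmit queue selection with the receive queues (path relative to the spdk repo)
    ip netns exec cvl_0_0_ns_spdk ./scripts/perf/nvmf/set_xps_rxqs cvl_0_0

This is what distinguishes the second pass from the first: on the target side it uses --enable-placement-id 1 and --sock-priority 1 (visible below), and the later nvmf_get_stats output shows the I/O qpairs concentrated on two poll groups instead of spread one per group, consistent with connections now being grouped by hardware queue placement.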
00:18:58.641 [2024-07-24 19:16:04.475503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.641 [2024-07-24 19:16:04.475596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.641 [2024-07-24 19:16:04.475677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.641 [2024-07-24 19:16:04.475708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.641 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.900 [2024-07-24 19:16:04.715945] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.900 Malloc1 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.900 [2024-07-24 19:16:04.766355] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2592094 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:18:58.900 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:58.900 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.800 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:00.800 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.800 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.800 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.800 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:00.800 "tick_rate": 2700000000, 00:19:00.800 "poll_groups": [ 00:19:00.800 { 00:19:00.800 "name": "nvmf_tgt_poll_group_000", 00:19:00.800 "admin_qpairs": 1, 00:19:00.800 "io_qpairs": 1, 00:19:00.800 "current_admin_qpairs": 1, 00:19:00.800 
"current_io_qpairs": 1, 00:19:00.800 "pending_bdev_io": 0, 00:19:00.800 "completed_nvme_io": 23094, 00:19:00.800 "transports": [ 00:19:00.800 { 00:19:00.800 "trtype": "TCP" 00:19:00.800 } 00:19:00.800 ] 00:19:00.800 }, 00:19:00.800 { 00:19:00.800 "name": "nvmf_tgt_poll_group_001", 00:19:00.800 "admin_qpairs": 0, 00:19:00.800 "io_qpairs": 3, 00:19:00.800 "current_admin_qpairs": 0, 00:19:00.800 "current_io_qpairs": 3, 00:19:00.800 "pending_bdev_io": 0, 00:19:00.800 "completed_nvme_io": 23134, 00:19:00.800 "transports": [ 00:19:00.800 { 00:19:00.800 "trtype": "TCP" 00:19:00.800 } 00:19:00.800 ] 00:19:00.800 }, 00:19:00.800 { 00:19:00.800 "name": "nvmf_tgt_poll_group_002", 00:19:00.800 "admin_qpairs": 0, 00:19:00.800 "io_qpairs": 0, 00:19:00.800 "current_admin_qpairs": 0, 00:19:00.800 "current_io_qpairs": 0, 00:19:00.800 "pending_bdev_io": 0, 00:19:00.800 "completed_nvme_io": 0, 00:19:00.800 "transports": [ 00:19:00.800 { 00:19:00.800 "trtype": "TCP" 00:19:00.800 } 00:19:00.800 ] 00:19:00.800 }, 00:19:00.800 { 00:19:00.800 "name": "nvmf_tgt_poll_group_003", 00:19:00.800 "admin_qpairs": 0, 00:19:00.800 "io_qpairs": 0, 00:19:00.800 "current_admin_qpairs": 0, 00:19:00.800 "current_io_qpairs": 0, 00:19:00.800 "pending_bdev_io": 0, 00:19:00.800 "completed_nvme_io": 0, 00:19:00.800 "transports": [ 00:19:00.800 { 00:19:00.800 "trtype": "TCP" 00:19:00.800 } 00:19:00.800 ] 00:19:00.800 } 00:19:00.800 ] 00:19:00.800 }' 00:19:00.800 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:00.800 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:01.057 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:01.057 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:01.057 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2592094 00:19:09.178 Initializing NVMe Controllers 00:19:09.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:09.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:09.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:09.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:09.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:09.178 Initialization complete. Launching workers. 
00:19:09.178 ======================================================== 00:19:09.178 Latency(us) 00:19:09.178 Device Information : IOPS MiB/s Average min max 00:19:09.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3797.80 14.84 16867.63 2652.58 63900.06 00:19:09.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12192.20 47.63 5249.15 1786.37 7825.05 00:19:09.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4103.10 16.03 15608.20 2067.37 62080.28 00:19:09.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4266.90 16.67 15012.47 2606.91 63724.71 00:19:09.178 ======================================================== 00:19:09.178 Total : 24360.00 95.16 10515.49 1786.37 63900.06 00:19:09.178 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:09.178 rmmod nvme_tcp 00:19:09.178 rmmod nvme_fabrics 00:19:09.178 rmmod nvme_keyring 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2592058 ']' 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2592058 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2592058 ']' 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2592058 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:09.178 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2592058 00:19:09.178 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:09.178 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:09.178 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2592058' 00:19:09.178 killing process with pid 2592058 00:19:09.178 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2592058 00:19:09.178 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2592058 00:19:09.438 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:09.438 
19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:09.438 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:09.438 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:09.438 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:09.438 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.438 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.438 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:12.734 00:19:12.734 real 0m44.049s 00:19:12.734 user 2m35.976s 00:19:12.734 sys 0m10.586s 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:12.734 ************************************ 00:19:12.734 END TEST nvmf_perf_adq 00:19:12.734 ************************************ 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:12.734 ************************************ 00:19:12.734 START TEST nvmf_shutdown 00:19:12.734 ************************************ 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:12.734 * Looking for test storage... 
00:19:12.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.734 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.735 19:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:12.735 19:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:12.735 ************************************ 00:19:12.735 START TEST nvmf_shutdown_tc1 00:19:12.735 ************************************ 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:12.735 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:14.114 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:14.114 19:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:14.114 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.114 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:14.115 Found net devices under 0000:08:00.0: cvl_0_0 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:14.115 Found net devices under 0000:08:00.1: cvl_0_1 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:14.115 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.375 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.375 19:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.375 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.375 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:14.375 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.375 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.375 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:14.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:19:14.376 00:19:14.376 --- 10.0.0.2 ping statistics --- 00:19:14.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.376 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:19:14.376 00:19:14.376 --- 10.0.0.1 ping statistics --- 00:19:14.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.376 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2594612 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2594612 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2594612 ']' 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.376 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:14.376 [2024-07-24 19:16:20.318006] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:19:14.376 [2024-07-24 19:16:20.318106] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.376 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.639 [2024-07-24 19:16:20.390461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:14.639 [2024-07-24 19:16:20.510107] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.639 [2024-07-24 19:16:20.510171] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.639 [2024-07-24 19:16:20.510187] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.639 [2024-07-24 19:16:20.510200] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.639 [2024-07-24 19:16:20.510212] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
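The block above is nvmftestinit turning the two detected E810 ports into a point-to-point NVMe/TCP link: cvl_0_0 is moved into a private network namespace to serve as the target side, while cvl_0_1 stays in the root namespace as the initiator. A condensed sketch, assembled from the commands in this trace (the cvl_* names and the 10.0.0.0/24 addressing are simply what this run detected and assigned, so read them as illustrative):

    # Flush stale addresses, then split the port pair across namespaces.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

The sub-millisecond round trips in the two ping reports confirm the link before nvmf_tgt is started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E), which is why every target-side command from here on carries the ip netns exec prefix.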
00:19:14.639 [2024-07-24 19:16:20.510286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.639 [2024-07-24 19:16:20.510361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:14.639 [2024-07-24 19:16:20.510365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.639 [2024-07-24 19:16:20.510338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.639 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:14.639 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:19:14.639 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:14.639 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:14.639 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:14.939 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.939 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:14.939 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.939 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:14.939 [2024-07-24 19:16:20.665808] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.939 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.939 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:14.939 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:14.939 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:14.939 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.940 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:14.940 Malloc1 00:19:14.940 [2024-07-24 19:16:20.756544] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.940 Malloc2 00:19:14.940 Malloc3 00:19:14.940 Malloc4 00:19:14.940 Malloc5 00:19:15.225 Malloc6 00:19:15.225 Malloc7 00:19:15.225 Malloc8 00:19:15.225 Malloc9 00:19:15.225 Malloc10 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2594763 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2594763 /var/tmp/bdevperf.sock 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2594763 ']' 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.225 19:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:15.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:15.225 { 00:19:15.225 "params": { 00:19:15.225 "name": "Nvme$subsystem", 00:19:15.225 "trtype": "$TEST_TRANSPORT", 00:19:15.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.225 "adrfam": "ipv4", 00:19:15.225 "trsvcid": "$NVMF_PORT", 00:19:15.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.225 "hdgst": ${hdgst:-false}, 00:19:15.225 "ddgst": ${ddgst:-false} 00:19:15.225 }, 00:19:15.225 "method": "bdev_nvme_attach_controller" 00:19:15.225 } 00:19:15.225 EOF 00:19:15.225 )") 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:15.225 { 00:19:15.225 "params": { 00:19:15.225 "name": "Nvme$subsystem", 00:19:15.225 "trtype": "$TEST_TRANSPORT", 00:19:15.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.225 "adrfam": "ipv4", 00:19:15.225 "trsvcid": "$NVMF_PORT", 00:19:15.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.225 "hdgst": ${hdgst:-false}, 00:19:15.225 "ddgst": ${ddgst:-false} 00:19:15.225 }, 00:19:15.225 "method": "bdev_nvme_attach_controller" 00:19:15.225 } 00:19:15.225 EOF 00:19:15.225 )") 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:15.225 { 00:19:15.225 "params": { 00:19:15.225 "name": 
"Nvme$subsystem", 00:19:15.225 "trtype": "$TEST_TRANSPORT", 00:19:15.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.225 "adrfam": "ipv4", 00:19:15.225 "trsvcid": "$NVMF_PORT", 00:19:15.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.225 "hdgst": ${hdgst:-false}, 00:19:15.225 "ddgst": ${ddgst:-false} 00:19:15.225 }, 00:19:15.225 "method": "bdev_nvme_attach_controller" 00:19:15.225 } 00:19:15.225 EOF 00:19:15.225 )") 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:15.225 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:15.225 { 00:19:15.225 "params": { 00:19:15.225 "name": "Nvme$subsystem", 00:19:15.225 "trtype": "$TEST_TRANSPORT", 00:19:15.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.226 "adrfam": "ipv4", 00:19:15.226 "trsvcid": "$NVMF_PORT", 00:19:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.226 "hdgst": ${hdgst:-false}, 00:19:15.226 "ddgst": ${ddgst:-false} 00:19:15.226 }, 00:19:15.226 "method": "bdev_nvme_attach_controller" 00:19:15.226 } 00:19:15.226 EOF 00:19:15.226 )") 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:15.226 { 00:19:15.226 "params": { 00:19:15.226 "name": "Nvme$subsystem", 00:19:15.226 "trtype": "$TEST_TRANSPORT", 00:19:15.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.226 "adrfam": "ipv4", 00:19:15.226 "trsvcid": "$NVMF_PORT", 00:19:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.226 "hdgst": ${hdgst:-false}, 00:19:15.226 "ddgst": ${ddgst:-false} 00:19:15.226 }, 00:19:15.226 "method": "bdev_nvme_attach_controller" 00:19:15.226 } 00:19:15.226 EOF 00:19:15.226 )") 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:15.226 { 00:19:15.226 "params": { 00:19:15.226 "name": "Nvme$subsystem", 00:19:15.226 "trtype": "$TEST_TRANSPORT", 00:19:15.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.226 "adrfam": "ipv4", 00:19:15.226 "trsvcid": "$NVMF_PORT", 00:19:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.226 "hdgst": ${hdgst:-false}, 00:19:15.226 "ddgst": ${ddgst:-false} 00:19:15.226 }, 00:19:15.226 "method": "bdev_nvme_attach_controller" 00:19:15.226 } 00:19:15.226 EOF 00:19:15.226 )") 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:15.226 { 00:19:15.226 "params": { 00:19:15.226 "name": "Nvme$subsystem", 00:19:15.226 "trtype": "$TEST_TRANSPORT", 00:19:15.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.226 "adrfam": "ipv4", 00:19:15.226 "trsvcid": "$NVMF_PORT", 00:19:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.226 "hdgst": ${hdgst:-false}, 00:19:15.226 "ddgst": ${ddgst:-false} 00:19:15.226 }, 00:19:15.226 "method": "bdev_nvme_attach_controller" 00:19:15.226 } 00:19:15.226 EOF 00:19:15.226 )") 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:15.226 { 00:19:15.226 "params": { 00:19:15.226 "name": "Nvme$subsystem", 00:19:15.226 "trtype": "$TEST_TRANSPORT", 00:19:15.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.226 "adrfam": "ipv4", 00:19:15.226 "trsvcid": "$NVMF_PORT", 00:19:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.226 "hdgst": ${hdgst:-false}, 00:19:15.226 "ddgst": ${ddgst:-false} 00:19:15.226 }, 00:19:15.226 "method": "bdev_nvme_attach_controller" 00:19:15.226 } 00:19:15.226 EOF 00:19:15.226 )") 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:15.226 { 00:19:15.226 "params": { 00:19:15.226 "name": "Nvme$subsystem", 00:19:15.226 "trtype": "$TEST_TRANSPORT", 00:19:15.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.226 "adrfam": "ipv4", 00:19:15.226 "trsvcid": "$NVMF_PORT", 00:19:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.226 "hdgst": ${hdgst:-false}, 00:19:15.226 "ddgst": ${ddgst:-false} 00:19:15.226 }, 00:19:15.226 "method": "bdev_nvme_attach_controller" 00:19:15.226 } 00:19:15.226 EOF 00:19:15.226 )") 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:15.226 { 00:19:15.226 "params": { 00:19:15.226 "name": "Nvme$subsystem", 00:19:15.226 "trtype": "$TEST_TRANSPORT", 00:19:15.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.226 "adrfam": "ipv4", 00:19:15.226 "trsvcid": "$NVMF_PORT", 00:19:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.226 "hdgst": ${hdgst:-false}, 00:19:15.226 "ddgst": ${ddgst:-false} 00:19:15.226 }, 00:19:15.226 "method": "bdev_nvme_attach_controller" 00:19:15.226 } 00:19:15.226 EOF 00:19:15.226 )") 00:19:15.226 19:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:15.226 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:15.226 "params": { 00:19:15.226 "name": "Nvme1", 00:19:15.226 "trtype": "tcp", 00:19:15.226 "traddr": "10.0.0.2", 00:19:15.226 "adrfam": "ipv4", 00:19:15.226 "trsvcid": "4420", 00:19:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.226 "hdgst": false, 00:19:15.226 "ddgst": false 00:19:15.226 }, 00:19:15.226 "method": "bdev_nvme_attach_controller" 00:19:15.226 },{ 00:19:15.226 "params": { 00:19:15.226 "name": "Nvme2", 00:19:15.226 "trtype": "tcp", 00:19:15.226 "traddr": "10.0.0.2", 00:19:15.226 "adrfam": "ipv4", 00:19:15.226 "trsvcid": "4420", 00:19:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:15.226 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:15.226 "hdgst": false, 00:19:15.226 "ddgst": false 00:19:15.226 }, 00:19:15.226 "method": "bdev_nvme_attach_controller" 00:19:15.226 },{ 00:19:15.226 "params": { 00:19:15.226 "name": "Nvme3", 00:19:15.226 "trtype": "tcp", 00:19:15.226 "traddr": "10.0.0.2", 00:19:15.226 "adrfam": "ipv4", 00:19:15.226 "trsvcid": "4420", 00:19:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:15.226 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:15.226 "hdgst": false, 00:19:15.226 "ddgst": false 00:19:15.226 }, 00:19:15.226 "method": "bdev_nvme_attach_controller" 00:19:15.226 },{ 00:19:15.226 "params": { 00:19:15.226 "name": "Nvme4", 00:19:15.226 "trtype": "tcp", 00:19:15.226 "traddr": "10.0.0.2", 00:19:15.226 "adrfam": "ipv4", 00:19:15.226 "trsvcid": "4420", 00:19:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:15.226 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:15.226 "hdgst": false, 00:19:15.226 "ddgst": false 00:19:15.226 }, 00:19:15.226 "method": "bdev_nvme_attach_controller" 00:19:15.226 },{ 00:19:15.226 "params": { 00:19:15.226 "name": "Nvme5", 00:19:15.226 "trtype": "tcp", 00:19:15.226 "traddr": "10.0.0.2", 00:19:15.226 "adrfam": "ipv4", 00:19:15.226 "trsvcid": "4420", 00:19:15.226 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:15.226 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:15.227 "hdgst": false, 00:19:15.227 "ddgst": false 00:19:15.227 }, 00:19:15.227 "method": "bdev_nvme_attach_controller" 00:19:15.227 },{ 00:19:15.227 "params": { 00:19:15.227 "name": "Nvme6", 00:19:15.227 "trtype": "tcp", 00:19:15.227 "traddr": "10.0.0.2", 00:19:15.227 "adrfam": "ipv4", 00:19:15.227 "trsvcid": "4420", 00:19:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:15.227 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:15.227 "hdgst": false, 00:19:15.227 "ddgst": false 00:19:15.227 }, 00:19:15.227 "method": "bdev_nvme_attach_controller" 00:19:15.227 },{ 00:19:15.227 "params": { 00:19:15.227 "name": "Nvme7", 00:19:15.227 "trtype": "tcp", 00:19:15.227 "traddr": "10.0.0.2", 00:19:15.227 "adrfam": "ipv4", 00:19:15.227 "trsvcid": "4420", 00:19:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:15.227 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:15.227 "hdgst": false, 00:19:15.227 "ddgst": false 00:19:15.227 }, 00:19:15.227 "method": "bdev_nvme_attach_controller" 00:19:15.227 },{ 00:19:15.227 "params": { 00:19:15.227 "name": "Nvme8", 00:19:15.227 "trtype": "tcp", 
00:19:15.227 "traddr": "10.0.0.2", 00:19:15.227 "adrfam": "ipv4", 00:19:15.227 "trsvcid": "4420", 00:19:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:15.227 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:15.227 "hdgst": false, 00:19:15.227 "ddgst": false 00:19:15.227 }, 00:19:15.227 "method": "bdev_nvme_attach_controller" 00:19:15.227 },{ 00:19:15.227 "params": { 00:19:15.227 "name": "Nvme9", 00:19:15.227 "trtype": "tcp", 00:19:15.227 "traddr": "10.0.0.2", 00:19:15.227 "adrfam": "ipv4", 00:19:15.227 "trsvcid": "4420", 00:19:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:15.227 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:15.227 "hdgst": false, 00:19:15.227 "ddgst": false 00:19:15.227 }, 00:19:15.227 "method": "bdev_nvme_attach_controller" 00:19:15.227 },{ 00:19:15.227 "params": { 00:19:15.227 "name": "Nvme10", 00:19:15.227 "trtype": "tcp", 00:19:15.227 "traddr": "10.0.0.2", 00:19:15.227 "adrfam": "ipv4", 00:19:15.227 "trsvcid": "4420", 00:19:15.227 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:15.227 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:15.227 "hdgst": false, 00:19:15.227 "ddgst": false 00:19:15.227 }, 00:19:15.227 "method": "bdev_nvme_attach_controller" 00:19:15.227 }' 00:19:15.487 [2024-07-24 19:16:21.245867] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:19:15.487 [2024-07-24 19:16:21.245955] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:15.487 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.487 [2024-07-24 19:16:21.308703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.487 [2024-07-24 19:16:21.425642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.396 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:17.396 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:19:17.396 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:17.396 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.396 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:17.396 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.396 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2594763 00:19:17.396 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:17.396 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:18.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2594763 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2594612 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.330 { 00:19:18.330 "params": { 00:19:18.330 "name": "Nvme$subsystem", 00:19:18.330 "trtype": "$TEST_TRANSPORT", 00:19:18.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.330 "adrfam": "ipv4", 00:19:18.330 "trsvcid": "$NVMF_PORT", 00:19:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.330 "hdgst": ${hdgst:-false}, 00:19:18.330 "ddgst": ${ddgst:-false} 00:19:18.330 }, 00:19:18.330 "method": "bdev_nvme_attach_controller" 00:19:18.330 } 00:19:18.330 EOF 00:19:18.330 )") 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.330 { 00:19:18.330 "params": { 00:19:18.330 "name": "Nvme$subsystem", 00:19:18.330 "trtype": "$TEST_TRANSPORT", 00:19:18.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.330 "adrfam": "ipv4", 00:19:18.330 "trsvcid": "$NVMF_PORT", 00:19:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.330 "hdgst": ${hdgst:-false}, 00:19:18.330 "ddgst": ${ddgst:-false} 00:19:18.330 }, 00:19:18.330 "method": "bdev_nvme_attach_controller" 00:19:18.330 } 00:19:18.330 EOF 00:19:18.330 )") 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.330 { 00:19:18.330 "params": { 00:19:18.330 "name": "Nvme$subsystem", 00:19:18.330 "trtype": "$TEST_TRANSPORT", 00:19:18.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.330 "adrfam": "ipv4", 00:19:18.330 "trsvcid": "$NVMF_PORT", 00:19:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.330 "hdgst": ${hdgst:-false}, 00:19:18.330 "ddgst": ${ddgst:-false} 00:19:18.330 }, 00:19:18.330 "method": "bdev_nvme_attach_controller" 00:19:18.330 } 00:19:18.330 EOF 00:19:18.330 )") 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.330 19:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.330 { 00:19:18.330 "params": { 00:19:18.330 "name": "Nvme$subsystem", 00:19:18.330 "trtype": "$TEST_TRANSPORT", 00:19:18.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.330 "adrfam": "ipv4", 00:19:18.330 "trsvcid": "$NVMF_PORT", 00:19:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.330 "hdgst": ${hdgst:-false}, 00:19:18.330 "ddgst": ${ddgst:-false} 00:19:18.330 }, 00:19:18.330 "method": "bdev_nvme_attach_controller" 00:19:18.330 } 00:19:18.330 EOF 00:19:18.330 )") 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.330 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.330 { 00:19:18.330 "params": { 00:19:18.330 "name": "Nvme$subsystem", 00:19:18.331 "trtype": "$TEST_TRANSPORT", 00:19:18.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.331 "adrfam": "ipv4", 00:19:18.331 "trsvcid": "$NVMF_PORT", 00:19:18.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.331 "hdgst": ${hdgst:-false}, 00:19:18.331 "ddgst": ${ddgst:-false} 00:19:18.331 }, 00:19:18.331 "method": "bdev_nvme_attach_controller" 00:19:18.331 } 00:19:18.331 EOF 00:19:18.331 )") 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.331 { 00:19:18.331 "params": { 00:19:18.331 "name": "Nvme$subsystem", 00:19:18.331 "trtype": "$TEST_TRANSPORT", 00:19:18.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.331 "adrfam": "ipv4", 00:19:18.331 "trsvcid": "$NVMF_PORT", 00:19:18.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.331 "hdgst": ${hdgst:-false}, 00:19:18.331 "ddgst": ${ddgst:-false} 00:19:18.331 }, 00:19:18.331 "method": "bdev_nvme_attach_controller" 00:19:18.331 } 00:19:18.331 EOF 00:19:18.331 )") 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.331 { 00:19:18.331 "params": { 00:19:18.331 "name": "Nvme$subsystem", 00:19:18.331 "trtype": "$TEST_TRANSPORT", 00:19:18.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.331 "adrfam": "ipv4", 00:19:18.331 "trsvcid": "$NVMF_PORT", 00:19:18.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.331 "hdgst": ${hdgst:-false}, 00:19:18.331 "ddgst": ${ddgst:-false} 00:19:18.331 }, 00:19:18.331 "method": "bdev_nvme_attach_controller" 00:19:18.331 } 00:19:18.331 EOF 00:19:18.331 )") 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.331 { 00:19:18.331 "params": { 00:19:18.331 "name": "Nvme$subsystem", 00:19:18.331 "trtype": "$TEST_TRANSPORT", 00:19:18.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.331 "adrfam": "ipv4", 00:19:18.331 "trsvcid": "$NVMF_PORT", 00:19:18.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.331 "hdgst": ${hdgst:-false}, 00:19:18.331 "ddgst": ${ddgst:-false} 00:19:18.331 }, 00:19:18.331 "method": "bdev_nvme_attach_controller" 00:19:18.331 } 00:19:18.331 EOF 00:19:18.331 )") 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.331 { 00:19:18.331 "params": { 00:19:18.331 "name": "Nvme$subsystem", 00:19:18.331 "trtype": "$TEST_TRANSPORT", 00:19:18.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.331 "adrfam": "ipv4", 00:19:18.331 "trsvcid": "$NVMF_PORT", 00:19:18.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.331 "hdgst": ${hdgst:-false}, 00:19:18.331 "ddgst": ${ddgst:-false} 00:19:18.331 }, 00:19:18.331 "method": "bdev_nvme_attach_controller" 00:19:18.331 } 00:19:18.331 EOF 00:19:18.331 )") 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.331 { 00:19:18.331 "params": { 00:19:18.331 "name": "Nvme$subsystem", 00:19:18.331 "trtype": "$TEST_TRANSPORT", 00:19:18.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.331 "adrfam": "ipv4", 00:19:18.331 "trsvcid": "$NVMF_PORT", 00:19:18.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.331 "hdgst": ${hdgst:-false}, 00:19:18.331 "ddgst": ${ddgst:-false} 00:19:18.331 }, 00:19:18.331 "method": "bdev_nvme_attach_controller" 00:19:18.331 } 00:19:18.331 EOF 00:19:18.331 )") 00:19:18.331 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:18.592 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
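The shutdown under test already happened a few entries up: kill -9 took out the bdev_svc holding the ten connections (pid 2594763 — note the shell's "Killed" report), and kill -0 2594612 confirmed the target survived losing its initiator. The loop just above rebuilds the identical ten-controller config for the follow-up bdevperf run; jq . normalizes the assembled JSON, and the printf that follows (with IFS=,) joins the fragments into the stream behind --json /dev/fd/62. For reference, the bdevperf flags on that command line decode as follows (standard bdevperf options, summarized here):

    # bdevperf invocation from shutdown.sh@91, path shortened:
    #   --json /dev/fd/62   attach the ten NVMe-oF controllers described above
    #   -q 64               queue depth: 64 outstanding I/Os per job
    #   -o 65536            I/O size in bytes (64 KiB)
    #   -w verify           write, read back, and compare workload
    #   -t 1                run each job for one second
    build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1

Those flags line up with the "Running I/O for 1 seconds..." marker and the per-Nvme*n1 verify jobs in the latency table that follows.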
00:19:18.592 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:18.592 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:18.592 "params": { 00:19:18.592 "name": "Nvme1", 00:19:18.592 "trtype": "tcp", 00:19:18.592 "traddr": "10.0.0.2", 00:19:18.592 "adrfam": "ipv4", 00:19:18.592 "trsvcid": "4420", 00:19:18.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:18.592 "hdgst": false, 00:19:18.592 "ddgst": false 00:19:18.592 }, 00:19:18.592 "method": "bdev_nvme_attach_controller" 00:19:18.592 },{ 00:19:18.592 "params": { 00:19:18.592 "name": "Nvme2", 00:19:18.592 "trtype": "tcp", 00:19:18.592 "traddr": "10.0.0.2", 00:19:18.592 "adrfam": "ipv4", 00:19:18.592 "trsvcid": "4420", 00:19:18.592 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:18.592 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:18.592 "hdgst": false, 00:19:18.592 "ddgst": false 00:19:18.592 }, 00:19:18.592 "method": "bdev_nvme_attach_controller" 00:19:18.592 },{ 00:19:18.592 "params": { 00:19:18.592 "name": "Nvme3", 00:19:18.592 "trtype": "tcp", 00:19:18.592 "traddr": "10.0.0.2", 00:19:18.592 "adrfam": "ipv4", 00:19:18.592 "trsvcid": "4420", 00:19:18.592 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:18.592 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:18.592 "hdgst": false, 00:19:18.592 "ddgst": false 00:19:18.592 }, 00:19:18.592 "method": "bdev_nvme_attach_controller" 00:19:18.592 },{ 00:19:18.592 "params": { 00:19:18.592 "name": "Nvme4", 00:19:18.592 "trtype": "tcp", 00:19:18.592 "traddr": "10.0.0.2", 00:19:18.592 "adrfam": "ipv4", 00:19:18.592 "trsvcid": "4420", 00:19:18.592 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:18.592 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:18.592 "hdgst": false, 00:19:18.592 "ddgst": false 00:19:18.592 }, 00:19:18.592 "method": "bdev_nvme_attach_controller" 00:19:18.592 },{ 00:19:18.592 "params": { 00:19:18.592 "name": "Nvme5", 00:19:18.592 "trtype": "tcp", 00:19:18.592 "traddr": "10.0.0.2", 00:19:18.592 "adrfam": "ipv4", 00:19:18.592 "trsvcid": "4420", 00:19:18.592 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:18.592 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:18.592 "hdgst": false, 00:19:18.592 "ddgst": false 00:19:18.592 }, 00:19:18.592 "method": "bdev_nvme_attach_controller" 00:19:18.592 },{ 00:19:18.592 "params": { 00:19:18.592 "name": "Nvme6", 00:19:18.592 "trtype": "tcp", 00:19:18.592 "traddr": "10.0.0.2", 00:19:18.592 "adrfam": "ipv4", 00:19:18.592 "trsvcid": "4420", 00:19:18.592 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:18.592 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:18.592 "hdgst": false, 00:19:18.592 "ddgst": false 00:19:18.592 }, 00:19:18.592 "method": "bdev_nvme_attach_controller" 00:19:18.592 },{ 00:19:18.592 "params": { 00:19:18.592 "name": "Nvme7", 00:19:18.592 "trtype": "tcp", 00:19:18.592 "traddr": "10.0.0.2", 00:19:18.592 "adrfam": "ipv4", 00:19:18.592 "trsvcid": "4420", 00:19:18.592 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:18.592 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:18.592 "hdgst": false, 00:19:18.592 "ddgst": false 00:19:18.592 }, 00:19:18.592 "method": "bdev_nvme_attach_controller" 00:19:18.592 },{ 00:19:18.592 "params": { 00:19:18.592 "name": "Nvme8", 00:19:18.592 "trtype": "tcp", 00:19:18.592 "traddr": "10.0.0.2", 00:19:18.592 "adrfam": "ipv4", 00:19:18.592 "trsvcid": "4420", 00:19:18.592 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:18.592 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:19:18.592 "hdgst": false, 00:19:18.592 "ddgst": false 00:19:18.592 }, 00:19:18.592 "method": "bdev_nvme_attach_controller" 00:19:18.592 },{ 00:19:18.592 "params": { 00:19:18.592 "name": "Nvme9", 00:19:18.592 "trtype": "tcp", 00:19:18.592 "traddr": "10.0.0.2", 00:19:18.592 "adrfam": "ipv4", 00:19:18.592 "trsvcid": "4420", 00:19:18.592 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:18.592 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:18.592 "hdgst": false, 00:19:18.592 "ddgst": false 00:19:18.592 }, 00:19:18.592 "method": "bdev_nvme_attach_controller" 00:19:18.592 },{ 00:19:18.592 "params": { 00:19:18.592 "name": "Nvme10", 00:19:18.592 "trtype": "tcp", 00:19:18.592 "traddr": "10.0.0.2", 00:19:18.592 "adrfam": "ipv4", 00:19:18.592 "trsvcid": "4420", 00:19:18.592 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:18.592 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:18.592 "hdgst": false, 00:19:18.592 "ddgst": false 00:19:18.592 }, 00:19:18.592 "method": "bdev_nvme_attach_controller" 00:19:18.592 }' 00:19:18.592 [2024-07-24 19:16:24.359286] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:19:18.592 [2024-07-24 19:16:24.359380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2595085 ] 00:19:18.592 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.592 [2024-07-24 19:16:24.424071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.592 [2024-07-24 19:16:24.543837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.971 Running I/O for 1 seconds... 00:19:21.349 00:19:21.349 Latency(us) 00:19:21.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.349 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:21.349 Verification LBA range: start 0x0 length 0x400 00:19:21.349 Nvme1n1 : 1.12 171.25 10.70 0.00 0.00 369302.95 23592.96 323116.75 00:19:21.349 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:21.349 Verification LBA range: start 0x0 length 0x400 00:19:21.349 Nvme2n1 : 1.21 162.83 10.18 0.00 0.00 379909.14 4344.79 349525.33 00:19:21.349 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:21.349 Verification LBA range: start 0x0 length 0x400 00:19:21.349 Nvme3n1 : 1.13 169.92 10.62 0.00 0.00 356993.96 22816.24 321563.31 00:19:21.349 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:21.349 Verification LBA range: start 0x0 length 0x400 00:19:21.349 Nvme4n1 : 1.22 210.26 13.14 0.00 0.00 282599.35 24369.68 293601.28 00:19:21.349 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:21.349 Verification LBA range: start 0x0 length 0x400 00:19:21.349 Nvme5n1 : 1.23 208.77 13.05 0.00 0.00 280192.76 20291.89 323116.75 00:19:21.349 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:21.349 Verification LBA range: start 0x0 length 0x400 00:19:21.349 Nvme6n1 : 1.13 169.47 10.59 0.00 0.00 334876.82 41943.04 301368.51 00:19:21.349 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:21.349 Verification LBA range: start 0x0 length 0x400 00:19:21.349 Nvme7n1 : 1.23 207.99 13.00 0.00 0.00 269666.99 17185.00 324670.20 00:19:21.349 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:21.349 
Verification LBA range: start 0x0 length 0x400 00:19:21.349 Nvme8n1 : 1.28 200.69 12.54 0.00 0.00 265439.00 20680.25 296708.17 00:19:21.349 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:21.349 Verification LBA range: start 0x0 length 0x400 00:19:21.349 Nvme9n1 : 1.24 206.68 12.92 0.00 0.00 260380.63 15825.73 313796.08 00:19:21.349 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:21.349 Verification LBA range: start 0x0 length 0x400 00:19:21.349 Nvme10n1 : 1.22 157.29 9.83 0.00 0.00 333962.30 23010.42 358846.01 00:19:21.349 =================================================================================================================== 00:19:21.349 Total : 1865.15 116.57 0.00 0.00 307540.12 4344.79 358846.01 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:21.610 rmmod nvme_tcp 00:19:21.610 rmmod nvme_fabrics 00:19:21.610 rmmod nvme_keyring 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2594612 ']' 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2594612 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2594612 ']' 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2594612 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2594612 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2594612' 00:19:21.610 killing process with pid 2594612 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2594612 00:19:21.610 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2594612 00:19:22.182 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:22.182 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:22.182 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:22.182 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:22.182 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:22.182 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.182 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.182 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:24.093 00:19:24.093 real 0m11.492s 00:19:24.093 user 0m34.241s 00:19:24.093 sys 0m2.902s 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:24.093 ************************************ 00:19:24.093 END TEST nvmf_shutdown_tc1 00:19:24.093 ************************************ 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:24.093 ************************************ 00:19:24.093 START TEST nvmf_shutdown_tc2 00:19:24.093 ************************************ 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:24.093 19:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.094 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:24.094 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:24.094 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:24.094 Found net devices under 0000:08:00.0: cvl_0_0 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.094 19:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:24.094 Found net devices under 0000:08:00.1: cvl_0_1 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.094 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.095 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.095 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:24.095 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.095 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.095 19:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:24.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:19:24.353 00:19:24.353 --- 10.0.0.2 ping statistics --- 00:19:24.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.353 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:24.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:19:24.353 00:19:24.353 --- 10.0.0.1 ping statistics --- 00:19:24.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.353 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2595694 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2595694 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2595694 ']' 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:24.353 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:24.353 [2024-07-24 19:16:30.201562] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:19:24.353 [2024-07-24 19:16:30.201651] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.353 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.353 [2024-07-24 19:16:30.263776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:24.611 [2024-07-24 19:16:30.368193] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.611 [2024-07-24 19:16:30.368244] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.611 [2024-07-24 19:16:30.368258] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.611 [2024-07-24 19:16:30.368269] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.611 [2024-07-24 19:16:30.368279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
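Before the target came up above, nvmf_tcp_init rebuilt the two-port test topology: the first ice port (cvl_0_0) moves into a fresh network namespace to host the target at 10.0.0.2, its sibling cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, and reachability is ping-verified in both directions. Condensed from the trace, with commands, interface names and addresses exactly as logged:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

nvmf_tgt itself is then launched under ip netns exec cvl_0_0_ns_spdk, which is why nvmfpid 2595694 listens on 10.0.0.2:4420 while bdevperf will connect from the root namespace.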
00:19:24.611 [2024-07-24 19:16:30.368325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.611 [2024-07-24 19:16:30.368426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:24.612 [2024-07-24 19:16:30.368429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.612 [2024-07-24 19:16:30.368374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:24.612 [2024-07-24 19:16:30.508307] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.612 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:24.612 Malloc1 00:19:24.612 [2024-07-24 19:16:30.589296] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.612 Malloc2 00:19:24.870 Malloc3 00:19:24.870 Malloc4 00:19:24.870 Malloc5 00:19:24.870 Malloc6 00:19:24.870 Malloc7 00:19:25.128 Malloc8 00:19:25.128 Malloc9 00:19:25.128 Malloc10 00:19:25.128 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.128 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:25.128 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:25.128 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2595836 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2595836 /var/tmp/bdevperf.sock 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2595836 ']' 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.128 19:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.128 { 00:19:25.128 "params": { 00:19:25.128 "name": "Nvme$subsystem", 00:19:25.128 "trtype": "$TEST_TRANSPORT", 00:19:25.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.128 "adrfam": "ipv4", 00:19:25.128 "trsvcid": "$NVMF_PORT", 00:19:25.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.128 "hdgst": ${hdgst:-false}, 00:19:25.128 "ddgst": ${ddgst:-false} 00:19:25.128 }, 00:19:25.128 "method": "bdev_nvme_attach_controller" 00:19:25.128 } 00:19:25.128 EOF 00:19:25.128 )") 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.128 { 00:19:25.128 "params": { 00:19:25.128 "name": "Nvme$subsystem", 00:19:25.128 "trtype": "$TEST_TRANSPORT", 00:19:25.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.128 "adrfam": "ipv4", 00:19:25.128 "trsvcid": "$NVMF_PORT", 00:19:25.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.128 "hdgst": ${hdgst:-false}, 00:19:25.128 "ddgst": ${ddgst:-false} 00:19:25.128 }, 00:19:25.128 "method": "bdev_nvme_attach_controller" 00:19:25.128 } 00:19:25.128 EOF 00:19:25.128 )") 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.128 { 00:19:25.128 "params": { 00:19:25.128 
"name": "Nvme$subsystem", 00:19:25.128 "trtype": "$TEST_TRANSPORT", 00:19:25.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.128 "adrfam": "ipv4", 00:19:25.128 "trsvcid": "$NVMF_PORT", 00:19:25.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.128 "hdgst": ${hdgst:-false}, 00:19:25.128 "ddgst": ${ddgst:-false} 00:19:25.128 }, 00:19:25.128 "method": "bdev_nvme_attach_controller" 00:19:25.128 } 00:19:25.128 EOF 00:19:25.128 )") 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.128 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.128 { 00:19:25.128 "params": { 00:19:25.128 "name": "Nvme$subsystem", 00:19:25.128 "trtype": "$TEST_TRANSPORT", 00:19:25.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.128 "adrfam": "ipv4", 00:19:25.128 "trsvcid": "$NVMF_PORT", 00:19:25.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.128 "hdgst": ${hdgst:-false}, 00:19:25.128 "ddgst": ${ddgst:-false} 00:19:25.128 }, 00:19:25.128 "method": "bdev_nvme_attach_controller" 00:19:25.128 } 00:19:25.128 EOF 00:19:25.128 )") 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.129 { 00:19:25.129 "params": { 00:19:25.129 "name": "Nvme$subsystem", 00:19:25.129 "trtype": "$TEST_TRANSPORT", 00:19:25.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.129 "adrfam": "ipv4", 00:19:25.129 "trsvcid": "$NVMF_PORT", 00:19:25.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.129 "hdgst": ${hdgst:-false}, 00:19:25.129 "ddgst": ${ddgst:-false} 00:19:25.129 }, 00:19:25.129 "method": "bdev_nvme_attach_controller" 00:19:25.129 } 00:19:25.129 EOF 00:19:25.129 )") 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.129 { 00:19:25.129 "params": { 00:19:25.129 "name": "Nvme$subsystem", 00:19:25.129 "trtype": "$TEST_TRANSPORT", 00:19:25.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.129 "adrfam": "ipv4", 00:19:25.129 "trsvcid": "$NVMF_PORT", 00:19:25.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.129 "hdgst": ${hdgst:-false}, 00:19:25.129 "ddgst": ${ddgst:-false} 00:19:25.129 }, 00:19:25.129 "method": "bdev_nvme_attach_controller" 00:19:25.129 } 00:19:25.129 EOF 00:19:25.129 )") 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.129 { 00:19:25.129 "params": { 00:19:25.129 "name": "Nvme$subsystem", 00:19:25.129 "trtype": "$TEST_TRANSPORT", 00:19:25.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.129 "adrfam": "ipv4", 00:19:25.129 "trsvcid": "$NVMF_PORT", 00:19:25.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.129 "hdgst": ${hdgst:-false}, 00:19:25.129 "ddgst": ${ddgst:-false} 00:19:25.129 }, 00:19:25.129 "method": "bdev_nvme_attach_controller" 00:19:25.129 } 00:19:25.129 EOF 00:19:25.129 )") 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.129 { 00:19:25.129 "params": { 00:19:25.129 "name": "Nvme$subsystem", 00:19:25.129 "trtype": "$TEST_TRANSPORT", 00:19:25.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.129 "adrfam": "ipv4", 00:19:25.129 "trsvcid": "$NVMF_PORT", 00:19:25.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.129 "hdgst": ${hdgst:-false}, 00:19:25.129 "ddgst": ${ddgst:-false} 00:19:25.129 }, 00:19:25.129 "method": "bdev_nvme_attach_controller" 00:19:25.129 } 00:19:25.129 EOF 00:19:25.129 )") 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.129 { 00:19:25.129 "params": { 00:19:25.129 "name": "Nvme$subsystem", 00:19:25.129 "trtype": "$TEST_TRANSPORT", 00:19:25.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.129 "adrfam": "ipv4", 00:19:25.129 "trsvcid": "$NVMF_PORT", 00:19:25.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.129 "hdgst": ${hdgst:-false}, 00:19:25.129 "ddgst": ${ddgst:-false} 00:19:25.129 }, 00:19:25.129 "method": "bdev_nvme_attach_controller" 00:19:25.129 } 00:19:25.129 EOF 00:19:25.129 )") 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.129 { 00:19:25.129 "params": { 00:19:25.129 "name": "Nvme$subsystem", 00:19:25.129 "trtype": "$TEST_TRANSPORT", 00:19:25.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.129 "adrfam": "ipv4", 00:19:25.129 "trsvcid": "$NVMF_PORT", 00:19:25.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.129 "hdgst": ${hdgst:-false}, 00:19:25.129 "ddgst": ${ddgst:-false} 00:19:25.129 }, 00:19:25.129 "method": "bdev_nvme_attach_controller" 00:19:25.129 } 00:19:25.129 EOF 00:19:25.129 )") 00:19:25.129 19:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:25.129 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:25.129 "params": { 00:19:25.129 "name": "Nvme1", 00:19:25.129 "trtype": "tcp", 00:19:25.129 "traddr": "10.0.0.2", 00:19:25.129 "adrfam": "ipv4", 00:19:25.129 "trsvcid": "4420", 00:19:25.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.129 "hdgst": false, 00:19:25.129 "ddgst": false 00:19:25.129 }, 00:19:25.129 "method": "bdev_nvme_attach_controller" 00:19:25.129 },{ 00:19:25.129 "params": { 00:19:25.129 "name": "Nvme2", 00:19:25.129 "trtype": "tcp", 00:19:25.129 "traddr": "10.0.0.2", 00:19:25.129 "adrfam": "ipv4", 00:19:25.129 "trsvcid": "4420", 00:19:25.129 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:25.129 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:25.129 "hdgst": false, 00:19:25.129 "ddgst": false 00:19:25.129 }, 00:19:25.129 "method": "bdev_nvme_attach_controller" 00:19:25.129 },{ 00:19:25.129 "params": { 00:19:25.129 "name": "Nvme3", 00:19:25.129 "trtype": "tcp", 00:19:25.129 "traddr": "10.0.0.2", 00:19:25.129 "adrfam": "ipv4", 00:19:25.129 "trsvcid": "4420", 00:19:25.129 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:25.129 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:25.129 "hdgst": false, 00:19:25.129 "ddgst": false 00:19:25.129 }, 00:19:25.129 "method": "bdev_nvme_attach_controller" 00:19:25.129 },{ 00:19:25.129 "params": { 00:19:25.129 "name": "Nvme4", 00:19:25.129 "trtype": "tcp", 00:19:25.129 "traddr": "10.0.0.2", 00:19:25.129 "adrfam": "ipv4", 00:19:25.129 "trsvcid": "4420", 00:19:25.129 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:25.129 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:25.129 "hdgst": false, 00:19:25.129 "ddgst": false 00:19:25.129 }, 00:19:25.129 "method": "bdev_nvme_attach_controller" 00:19:25.129 },{ 00:19:25.129 "params": { 00:19:25.129 "name": "Nvme5", 00:19:25.129 "trtype": "tcp", 00:19:25.129 "traddr": "10.0.0.2", 00:19:25.129 "adrfam": "ipv4", 00:19:25.129 "trsvcid": "4420", 00:19:25.129 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:25.129 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:25.129 "hdgst": false, 00:19:25.129 "ddgst": false 00:19:25.129 }, 00:19:25.129 "method": "bdev_nvme_attach_controller" 00:19:25.129 },{ 00:19:25.129 "params": { 00:19:25.129 "name": "Nvme6", 00:19:25.129 "trtype": "tcp", 00:19:25.129 "traddr": "10.0.0.2", 00:19:25.129 "adrfam": "ipv4", 00:19:25.129 "trsvcid": "4420", 00:19:25.129 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:25.129 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:25.129 "hdgst": false, 00:19:25.129 "ddgst": false 00:19:25.129 }, 00:19:25.129 "method": "bdev_nvme_attach_controller" 00:19:25.129 },{ 00:19:25.129 "params": { 00:19:25.129 "name": "Nvme7", 00:19:25.129 "trtype": "tcp", 00:19:25.129 "traddr": "10.0.0.2", 00:19:25.129 "adrfam": "ipv4", 00:19:25.129 "trsvcid": "4420", 00:19:25.129 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:25.129 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:25.129 "hdgst": false, 00:19:25.129 "ddgst": false 00:19:25.129 }, 00:19:25.129 "method": "bdev_nvme_attach_controller" 00:19:25.129 },{ 00:19:25.129 "params": { 00:19:25.129 "name": "Nvme8", 00:19:25.129 "trtype": "tcp", 
00:19:25.129 "traddr": "10.0.0.2", 00:19:25.130 "adrfam": "ipv4", 00:19:25.130 "trsvcid": "4420", 00:19:25.130 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:25.130 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:25.130 "hdgst": false, 00:19:25.130 "ddgst": false 00:19:25.130 }, 00:19:25.130 "method": "bdev_nvme_attach_controller" 00:19:25.130 },{ 00:19:25.130 "params": { 00:19:25.130 "name": "Nvme9", 00:19:25.130 "trtype": "tcp", 00:19:25.130 "traddr": "10.0.0.2", 00:19:25.130 "adrfam": "ipv4", 00:19:25.130 "trsvcid": "4420", 00:19:25.130 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:25.130 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:25.130 "hdgst": false, 00:19:25.130 "ddgst": false 00:19:25.130 }, 00:19:25.130 "method": "bdev_nvme_attach_controller" 00:19:25.130 },{ 00:19:25.130 "params": { 00:19:25.130 "name": "Nvme10", 00:19:25.130 "trtype": "tcp", 00:19:25.130 "traddr": "10.0.0.2", 00:19:25.130 "adrfam": "ipv4", 00:19:25.130 "trsvcid": "4420", 00:19:25.130 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:25.130 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:25.130 "hdgst": false, 00:19:25.130 "ddgst": false 00:19:25.130 }, 00:19:25.130 "method": "bdev_nvme_attach_controller" 00:19:25.130 }' 00:19:25.130 [2024-07-24 19:16:31.069622] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:19:25.130 [2024-07-24 19:16:31.069713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2595836 ] 00:19:25.130 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.130 [2024-07-24 19:16:31.127407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.388 [2024-07-24 19:16:31.227206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.759 Running I/O for 10 seconds... 
00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:27.326 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:27.583 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:27.583 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:27.583 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:27.583 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:27.583 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.583 19:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:27.583 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.583 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:27.583 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:27.583 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:27.841 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:27.841 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:27.841 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2595836 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2595836 ']' 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2595836 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2595836 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2595836' 00:19:27.842 killing process with pid 2595836 00:19:27.842 19:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2595836 00:19:27.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2595836 00:19:28.101 Received shutdown signal, test time was about 1.151797 seconds 00:19:28.101 00:19:28.101 Latency(us) 00:19:28.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.101 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:28.101 Verification LBA range: start 0x0 length 0x400 00:19:28.101 Nvme1n1 : 1.11 179.81 11.24 0.00 0.00 346851.22 4271.98 313796.08 00:19:28.101 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:28.101 Verification LBA range: start 0x0 length 0x400 00:19:28.101 Nvme2n1 : 1.10 174.02 10.88 0.00 0.00 357193.96 25243.50 312242.63 00:19:28.101 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:28.101 Verification LBA range: start 0x0 length 0x400 00:19:28.101 Nvme3n1 : 1.14 224.33 14.02 0.00 0.00 272758.33 23301.69 307582.29 00:19:28.101 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:28.101 Verification LBA range: start 0x0 length 0x400 00:19:28.101 Nvme4n1 : 1.13 225.80 14.11 0.00 0.00 265992.15 14660.65 316902.97 00:19:28.101 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:28.101 Verification LBA range: start 0x0 length 0x400 00:19:28.101 Nvme5n1 : 1.15 222.44 13.90 0.00 0.00 265645.70 20777.34 324670.20 00:19:28.101 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:28.101 Verification LBA range: start 0x0 length 0x400 00:19:28.101 Nvme6n1 : 1.12 170.99 10.69 0.00 0.00 338689.39 24369.68 320009.86 00:19:28.101 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:28.101 Verification LBA range: start 0x0 length 0x400 00:19:28.101 Nvme7n1 : 1.10 174.44 10.90 0.00 0.00 324970.57 32428.18 313796.08 00:19:28.101 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:28.101 Verification LBA range: start 0x0 length 0x400 00:19:28.101 Nvme8n1 : 1.15 223.36 13.96 0.00 0.00 250341.83 21456.97 321563.31 00:19:28.101 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:28.101 Verification LBA range: start 0x0 length 0x400 00:19:28.101 Nvme9n1 : 1.12 171.72 10.73 0.00 0.00 318418.99 34564.17 315349.52 00:19:28.101 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:28.101 Verification LBA range: start 0x0 length 0x400 00:19:28.101 Nvme10n1 : 1.13 169.91 10.62 0.00 0.00 315889.08 25049.32 365059.79 00:19:28.101 =================================================================================================================== 00:19:28.101 Total : 1936.82 121.05 0.00 0.00 300903.97 4271.98 365059.79 00:19:28.359 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2595694 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:29.293 rmmod nvme_tcp 00:19:29.293 rmmod nvme_fabrics 00:19:29.293 rmmod nvme_keyring 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2595694 ']' 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2595694 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2595694 ']' 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2595694 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2595694 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2595694' 00:19:29.293 killing process with pid 2595694 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2595694 00:19:29.293 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2595694 00:19:29.859 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:29.859 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
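The pass/fail logic traced above is target/shutdown.sh's waitforio helper: it polls bdevperf's per-bdev iostat over the RPC socket until Nvme1n1 reports at least 100 completed reads, sampling 67 on the first pass and 131 on the second in this run. A minimal reconstruction from the xtrace, assuming only that rpc_cmd is the suite's rpc.py wrapper bound to the given socket:

    # Reconstructed from the trace (shutdown.sh@50-69): poll bdevperf's
    # iostat until the bdev shows >= 100 completed reads, giving up after
    # 10 attempts spaced 0.25 s apart.
    waitforio() {
        local rpc_sock=$1 bdev=$2
        local ret=1 i read_io_count
        [ -z "$rpc_sock" ] && return 1
        [ -z "$bdev" ] && return 1
        for ((i = 10; i != 0; i--)); do
            # bdev_get_iostat returns JSON; jq extracts the read counter
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }

A non-zero return here would fail the test: ten misses in a row, roughly 2.5 s with no reads completing, means the attached controllers never started serving I/O.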
00:19:29.859 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:29.859 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.859 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:29.859 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.859 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.859 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:31.766 00:19:31.766 real 0m7.691s 00:19:31.766 user 0m23.322s 00:19:31.766 sys 0m1.493s 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:31.766 ************************************ 00:19:31.766 END TEST nvmf_shutdown_tc2 00:19:31.766 ************************************ 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:31.766 ************************************ 00:19:31.766 START TEST nvmf_shutdown_tc3 00:19:31.766 ************************************ 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
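Everything from "rm -f ./local-job0-0-verify.state" down to the address flush is the standard teardown pair, stoptarget plus nvmftestfini. A condensed reconstruction of the traced sequence follows; these are not the verbatim helpers, and the modprobe retry pacing and the sudo handling in killprocess are simplified, as flagged in the comments:

    # stoptarget: drop bdevperf artifacts, then tear down the target.
    stoptarget() {
        rm -f ./local-job0-0-verify.state
        rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
        nvmftestfini
    }

    # killprocess (autotest_common.sh@950-974): confirm the pid is alive,
    # refuse to signal a bare sudo wrapper, then SIGTERM and reap it.
    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        # simplified: the real helper goes after sudo's child instead
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }

    # nvmftestfini, condensed: unload the initiator-side kernel modules,
    # kill nvmf_tgt, then undo the tcp-specific network setup.
    nvmftestfini() {
        sync
        set +e
        for i in {1..20}; do
            # also drops the nvme_fabrics/nvme_keyring dependencies
            modprobe -v -r nvme-tcp && break
            sleep 1    # assumption: the retry pacing is not visible in the trace
        done
        modprobe -v -r nvme-fabrics
        set -e
        [ -n "$nvmfpid" ] && killprocess "$nvmfpid"
        _remove_spdk_ns              # deletes cvl_0_0_ns_spdk
        ip -4 addr flush cvl_0_1     # initiator-side cleanup, as traced
    }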
00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:31.766 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:31.767 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:31.767 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:31.767 19:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:31.767 Found net devices under 0000:08:00.0: cvl_0_0 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:31.767 Found net devices under 0000:08:00.1: cvl_0_1 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.767 19:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.767 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:32.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:19:32.025 00:19:32.025 --- 10.0.0.2 ping statistics --- 00:19:32.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.025 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:32.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:19:32.025 00:19:32.025 --- 10.0.0.1 ping statistics --- 00:19:32.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.025 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2596559 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2596559 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2596559 ']' 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
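The nvmftestinit phase above is real-hardware plumbing: after matching the two Intel E810 ports (0x8086:0x159b) and locating their net devices cvl_0_0/cvl_0_1 through /sys/bus/pci/devices/*/net, it splits the pair across network namespaces so a single host can play both target and initiator. The command sequence, verbatim from the trace:

    # Target port cvl_0_0 moves into its own namespace and takes 10.0.0.2;
    # initiator port cvl_0_1 stays in the root namespace as 10.0.0.1.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic (port 4420) in on the initiator side
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

Both pings answering (0.439 ms and 0.173 ms here) clears the way for nvmfappstart, which prefixes the target binary with "ip netns exec cvl_0_0_ns_spdk" and launches nvmf_tgt -i 0 -e 0xFFFF -m 0x1E; the 0x1E core mask is why the reactors below come up on cores 1 through 4, while waitforlisten blocks until /var/tmp/spdk.sock accepts RPCs.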
00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.025 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:32.025 [2024-07-24 19:16:37.964858] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:19:32.025 [2024-07-24 19:16:37.964951] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.025 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.025 [2024-07-24 19:16:38.020343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:32.283 [2024-07-24 19:16:38.126053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.283 [2024-07-24 19:16:38.126113] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.283 [2024-07-24 19:16:38.126140] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.283 [2024-07-24 19:16:38.126151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.283 [2024-07-24 19:16:38.126161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.283 [2024-07-24 19:16:38.126252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.283 [2024-07-24 19:16:38.126298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:32.283 [2024-07-24 19:16:38.126351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:32.283 [2024-07-24 19:16:38.126353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.283 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.283 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:19:32.283 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:32.283 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.283 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:32.283 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.283 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:32.283 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.283 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:32.283 [2024-07-24 19:16:38.274377] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.283 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.283 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:19:32.283 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:32.283 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:32.284 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:32.284 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:32.284 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:32.284 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:32.284 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:32.284 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:32.284 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:32.284 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:32.284 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:32.284 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.541 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.541 Malloc1 00:19:32.541 [2024-07-24 19:16:38.355310] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.541 Malloc2 00:19:32.541 Malloc3 00:19:32.541 Malloc4 00:19:32.541 Malloc5 00:19:32.541 Malloc6 00:19:32.799 Malloc7 00:19:32.799 Malloc8 00:19:32.799 Malloc9 00:19:32.799 Malloc10 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2596704 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2596704 /var/tmp/bdevperf.sock 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2596704 ']' 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
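create_subsystems batches its RPCs rather than issuing them one at a time: each pass of the {1..10} loop appends one stanza to rpcs.txt, and the single rpc_cmd at shutdown.sh@35 replays the whole file over /var/tmp/spdk.sock, which is why all ten Malloc bdevs appear at once. The stanza bodies are not echoed by the trace; the sketch below is inferred from their visible effects (Malloc1..Malloc10, the cnode NQNs, the 10.0.0.2:4420 listener), and the malloc size and block-size arguments are placeholders:

    # Inferred per-subsystem stanza; the trace shows only the loop, the cat,
    # and the final rpc_cmd, not the file contents. Grouped echoes stand in
    # for the script's heredoc to keep this sketch indentation-safe.
    for i in "${num_subsystems[@]}"; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$testdir/rpcs.txt"
    done
    rpc_cmd < "$testdir/rpcs.txt"

bdevperf is then pointed at these subsystems over the wire: the gen_nvmf_target_json call visible above builds its --json configuration on the fly.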
00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.799 { 00:19:32.799 "params": { 00:19:32.799 "name": "Nvme$subsystem", 00:19:32.799 "trtype": "$TEST_TRANSPORT", 00:19:32.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.799 "adrfam": "ipv4", 00:19:32.799 "trsvcid": "$NVMF_PORT", 00:19:32.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.799 "hdgst": ${hdgst:-false}, 00:19:32.799 "ddgst": ${ddgst:-false} 00:19:32.799 }, 00:19:32.799 "method": "bdev_nvme_attach_controller" 00:19:32.799 } 00:19:32.799 EOF 00:19:32.799 )") 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.799 { 00:19:32.799 "params": { 00:19:32.799 "name": "Nvme$subsystem", 00:19:32.799 "trtype": "$TEST_TRANSPORT", 00:19:32.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.799 "adrfam": "ipv4", 00:19:32.799 "trsvcid": "$NVMF_PORT", 00:19:32.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.799 "hdgst": ${hdgst:-false}, 00:19:32.799 "ddgst": ${ddgst:-false} 00:19:32.799 }, 00:19:32.799 "method": "bdev_nvme_attach_controller" 00:19:32.799 } 00:19:32.799 EOF 00:19:32.799 )") 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.799 { 00:19:32.799 "params": { 00:19:32.799 "name": "Nvme$subsystem", 00:19:32.799 "trtype": "$TEST_TRANSPORT", 00:19:32.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.799 "adrfam": "ipv4", 00:19:32.799 "trsvcid": "$NVMF_PORT", 00:19:32.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.799 "hdgst": ${hdgst:-false}, 00:19:32.799 "ddgst": ${ddgst:-false} 00:19:32.799 }, 00:19:32.799 "method": "bdev_nvme_attach_controller" 00:19:32.799 } 00:19:32.799 EOF 00:19:32.799 )") 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:19:32.799 { 00:19:32.799 "params": { 00:19:32.799 "name": "Nvme$subsystem", 00:19:32.799 "trtype": "$TEST_TRANSPORT", 00:19:32.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.799 "adrfam": "ipv4", 00:19:32.799 "trsvcid": "$NVMF_PORT", 00:19:32.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.799 "hdgst": ${hdgst:-false}, 00:19:32.799 "ddgst": ${ddgst:-false} 00:19:32.799 }, 00:19:32.799 "method": "bdev_nvme_attach_controller" 00:19:32.799 } 00:19:32.799 EOF 00:19:32.799 )") 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.799 { 00:19:32.799 "params": { 00:19:32.799 "name": "Nvme$subsystem", 00:19:32.799 "trtype": "$TEST_TRANSPORT", 00:19:32.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.799 "adrfam": "ipv4", 00:19:32.799 "trsvcid": "$NVMF_PORT", 00:19:32.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.799 "hdgst": ${hdgst:-false}, 00:19:32.799 "ddgst": ${ddgst:-false} 00:19:32.799 }, 00:19:32.799 "method": "bdev_nvme_attach_controller" 00:19:32.799 } 00:19:32.799 EOF 00:19:32.799 )") 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.799 { 00:19:32.799 "params": { 00:19:32.799 "name": "Nvme$subsystem", 00:19:32.799 "trtype": "$TEST_TRANSPORT", 00:19:32.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.799 "adrfam": "ipv4", 00:19:32.799 "trsvcid": "$NVMF_PORT", 00:19:32.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.799 "hdgst": ${hdgst:-false}, 00:19:32.799 "ddgst": ${ddgst:-false} 00:19:32.799 }, 00:19:32.799 "method": "bdev_nvme_attach_controller" 00:19:32.799 } 00:19:32.799 EOF 00:19:32.799 )") 00:19:32.799 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:33.057 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:33.057 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:33.057 { 00:19:33.057 "params": { 00:19:33.057 "name": "Nvme$subsystem", 00:19:33.057 "trtype": "$TEST_TRANSPORT", 00:19:33.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.057 "adrfam": "ipv4", 00:19:33.057 "trsvcid": "$NVMF_PORT", 00:19:33.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.057 "hdgst": ${hdgst:-false}, 00:19:33.057 "ddgst": ${ddgst:-false} 00:19:33.057 }, 00:19:33.057 "method": "bdev_nvme_attach_controller" 00:19:33.057 } 00:19:33.057 EOF 00:19:33.057 )") 00:19:33.057 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:33.057 19:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:33.057 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:33.057 { 00:19:33.057 "params": { 00:19:33.057 "name": "Nvme$subsystem", 00:19:33.057 "trtype": "$TEST_TRANSPORT", 00:19:33.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.057 "adrfam": "ipv4", 00:19:33.057 "trsvcid": "$NVMF_PORT", 00:19:33.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.057 "hdgst": ${hdgst:-false}, 00:19:33.057 "ddgst": ${ddgst:-false} 00:19:33.057 }, 00:19:33.057 "method": "bdev_nvme_attach_controller" 00:19:33.057 } 00:19:33.057 EOF 00:19:33.057 )") 00:19:33.057 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:33.057 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:33.057 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:33.057 { 00:19:33.057 "params": { 00:19:33.057 "name": "Nvme$subsystem", 00:19:33.057 "trtype": "$TEST_TRANSPORT", 00:19:33.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.057 "adrfam": "ipv4", 00:19:33.057 "trsvcid": "$NVMF_PORT", 00:19:33.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.057 "hdgst": ${hdgst:-false}, 00:19:33.057 "ddgst": ${ddgst:-false} 00:19:33.057 }, 00:19:33.057 "method": "bdev_nvme_attach_controller" 00:19:33.057 } 00:19:33.057 EOF 00:19:33.057 )") 00:19:33.057 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:33.057 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:33.057 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:33.057 { 00:19:33.057 "params": { 00:19:33.057 "name": "Nvme$subsystem", 00:19:33.057 "trtype": "$TEST_TRANSPORT", 00:19:33.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.057 "adrfam": "ipv4", 00:19:33.057 "trsvcid": "$NVMF_PORT", 00:19:33.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.057 "hdgst": ${hdgst:-false}, 00:19:33.057 "ddgst": ${ddgst:-false} 00:19:33.057 }, 00:19:33.057 "method": "bdev_nvme_attach_controller" 00:19:33.057 } 00:19:33.057 EOF 00:19:33.057 )") 00:19:33.057 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:33.057 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
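gen_nvmf_target_json, whose loop is traced above, stamps out one copy of that heredoc template per subsystem number and then, via the jq/IFS/printf steps, splices the fragments into a {"subsystems": [{"subsystem": "bdev", "config": [...]}]} wrapper that bdevperf reads from /dev/fd/63. The wrapper shape is inferred from the helper's tail; the fragment below is exactly what the printf emits for subsystem 1, as the log confirms next:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

Each successful attach surfaces a Nvme<i>n1 bdev inside bdevperf, and those ten bdevs are what the -q 64 -o 65536 -w verify -t 10 workload drives while waitforio watches the read counters.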
00:19:33.057 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:33.057 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:33.057 "params": { 00:19:33.057 "name": "Nvme1", 00:19:33.057 "trtype": "tcp", 00:19:33.057 "traddr": "10.0.0.2", 00:19:33.057 "adrfam": "ipv4", 00:19:33.057 "trsvcid": "4420", 00:19:33.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.057 "hdgst": false, 00:19:33.057 "ddgst": false 00:19:33.057 }, 00:19:33.057 "method": "bdev_nvme_attach_controller" 00:19:33.057 },{ 00:19:33.057 "params": { 00:19:33.057 "name": "Nvme2", 00:19:33.057 "trtype": "tcp", 00:19:33.057 "traddr": "10.0.0.2", 00:19:33.057 "adrfam": "ipv4", 00:19:33.057 "trsvcid": "4420", 00:19:33.057 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:33.057 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:33.057 "hdgst": false, 00:19:33.057 "ddgst": false 00:19:33.057 }, 00:19:33.057 "method": "bdev_nvme_attach_controller" 00:19:33.057 },{ 00:19:33.057 "params": { 00:19:33.057 "name": "Nvme3", 00:19:33.057 "trtype": "tcp", 00:19:33.057 "traddr": "10.0.0.2", 00:19:33.057 "adrfam": "ipv4", 00:19:33.057 "trsvcid": "4420", 00:19:33.057 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:33.057 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:33.057 "hdgst": false, 00:19:33.057 "ddgst": false 00:19:33.057 }, 00:19:33.057 "method": "bdev_nvme_attach_controller" 00:19:33.057 },{ 00:19:33.057 "params": { 00:19:33.057 "name": "Nvme4", 00:19:33.057 "trtype": "tcp", 00:19:33.057 "traddr": "10.0.0.2", 00:19:33.057 "adrfam": "ipv4", 00:19:33.057 "trsvcid": "4420", 00:19:33.057 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:33.057 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:33.057 "hdgst": false, 00:19:33.057 "ddgst": false 00:19:33.057 }, 00:19:33.057 "method": "bdev_nvme_attach_controller" 00:19:33.057 },{ 00:19:33.057 "params": { 00:19:33.057 "name": "Nvme5", 00:19:33.057 "trtype": "tcp", 00:19:33.057 "traddr": "10.0.0.2", 00:19:33.057 "adrfam": "ipv4", 00:19:33.057 "trsvcid": "4420", 00:19:33.057 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:33.058 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:33.058 "hdgst": false, 00:19:33.058 "ddgst": false 00:19:33.058 }, 00:19:33.058 "method": "bdev_nvme_attach_controller" 00:19:33.058 },{ 00:19:33.058 "params": { 00:19:33.058 "name": "Nvme6", 00:19:33.058 "trtype": "tcp", 00:19:33.058 "traddr": "10.0.0.2", 00:19:33.058 "adrfam": "ipv4", 00:19:33.058 "trsvcid": "4420", 00:19:33.058 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:33.058 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:33.058 "hdgst": false, 00:19:33.058 "ddgst": false 00:19:33.058 }, 00:19:33.058 "method": "bdev_nvme_attach_controller" 00:19:33.058 },{ 00:19:33.058 "params": { 00:19:33.058 "name": "Nvme7", 00:19:33.058 "trtype": "tcp", 00:19:33.058 "traddr": "10.0.0.2", 00:19:33.058 "adrfam": "ipv4", 00:19:33.058 "trsvcid": "4420", 00:19:33.058 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:33.058 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:33.058 "hdgst": false, 00:19:33.058 "ddgst": false 00:19:33.058 }, 00:19:33.058 "method": "bdev_nvme_attach_controller" 00:19:33.058 },{ 00:19:33.058 "params": { 00:19:33.058 "name": "Nvme8", 00:19:33.058 "trtype": "tcp", 00:19:33.058 "traddr": "10.0.0.2", 00:19:33.058 "adrfam": "ipv4", 00:19:33.058 "trsvcid": "4420", 00:19:33.058 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:33.058 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:19:33.058 "hdgst": false, 00:19:33.058 "ddgst": false 00:19:33.058 }, 00:19:33.058 "method": "bdev_nvme_attach_controller" 00:19:33.058 },{ 00:19:33.058 "params": { 00:19:33.058 "name": "Nvme9", 00:19:33.058 "trtype": "tcp", 00:19:33.058 "traddr": "10.0.0.2", 00:19:33.058 "adrfam": "ipv4", 00:19:33.058 "trsvcid": "4420", 00:19:33.058 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:33.058 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:33.058 "hdgst": false, 00:19:33.058 "ddgst": false 00:19:33.058 }, 00:19:33.058 "method": "bdev_nvme_attach_controller" 00:19:33.058 },{ 00:19:33.058 "params": { 00:19:33.058 "name": "Nvme10", 00:19:33.058 "trtype": "tcp", 00:19:33.058 "traddr": "10.0.0.2", 00:19:33.058 "adrfam": "ipv4", 00:19:33.058 "trsvcid": "4420", 00:19:33.058 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:33.058 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:33.058 "hdgst": false, 00:19:33.058 "ddgst": false 00:19:33.058 }, 00:19:33.058 "method": "bdev_nvme_attach_controller" 00:19:33.058 }' 00:19:33.058 [2024-07-24 19:16:38.840271] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:19:33.058 [2024-07-24 19:16:38.840363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596704 ] 00:19:33.058 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.058 [2024-07-24 19:16:38.898149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.058 [2024-07-24 19:16:38.997917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.428 Running I/O for 10 seconds... 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.993 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:34.993 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.252 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:35.252 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:35.252 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:35.252 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:35.252 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:35.252 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:35.252 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:35.252 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.252 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2596559 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2596559 ']' 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2596559 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:35.525 19:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2596559 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2596559' 00:19:35.525 killing process with pid 2596559 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2596559 00:19:35.525 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2596559 00:19:35.525 [2024-07-24 19:16:41.333264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852d80 is same with the state(5) to be set 00:19:35.525 [2024-07-24 19:16:41.333618] 
00:19:35.526 [2024-07-24 19:16:41.336259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853240 is same with the state(5) to be set
00:19:35.526 [... repeated for tqpair=0x1853240 ...]
00:19:35.527 [2024-07-24 19:16:41.338693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853700 is same with the state(5) to be set
00:19:35.527 [... repeated for tqpair=0x1853700 ...]
00:19:35.528 [2024-07-24 19:16:41.340840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853be0 is same with the state(5) to be set
00:19:35.528 [... repeated for tqpair=0x1853be0 ...]
00:19:35.528 [2024-07-24 19:16:41.343170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854580 is same with the state(5) to be set
00:19:35.529 [... repeated for tqpair=0x1854580 ...]
00:19:35.529 [2024-07-24 19:16:41.345244] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854a40 is same with the state(5) to be set
00:19:35.530 [... repeated for tqpair=0x1854a40 ...]
00:19:35.530 [2024-07-24 19:16:41.347143] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set
00:19:35.531 [... repeated for tqpair=0x1854f20; the stream is cut off mid-line here ...]
with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.347840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.347853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.347866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.347880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.347893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.347906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.347920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.347934] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.347947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.347961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.347977] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.347991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854f20 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348909] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348977] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.348990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349043] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349110] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the 
state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.531 [2024-07-24 19:16:41.349412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349452] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349551] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.349676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873880 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.350183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x204f270 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.350400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a2890 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.350638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd90 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.350815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 
19:16:41.350852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.350926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.350941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0610 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.350989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.351040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.351060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.351074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.351089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.351103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.351118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.351137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.351151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a3490 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.351201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.351221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.351237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.351252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.351267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.351282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.351297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.351311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.351324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6690 is same with the state(5) to be set 00:19:35.532 [2024-07-24 19:16:41.351370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.351390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-07-24 19:16:41.351406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-07-24 19:16:41.351421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.351436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.533 [2024-07-24 19:16:41.351450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.351465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.533 [2024-07-24 19:16:41.351492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.351509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f06990 is same with the state(5) to be set 00:19:35.533 [2024-07-24 19:16:41.351563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.533 [2024-07-24 19:16:41.351583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.351602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.533 [2024-07-24 19:16:41.351616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.351631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.533 [2024-07-24 19:16:41.351650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.351665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.533 [2024-07-24 
19:16:41.351679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.351693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f09100 is same with the state(5) to be set 00:19:35.533 [2024-07-24 19:16:41.351740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.533 [2024-07-24 19:16:41.351760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.351776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.533 [2024-07-24 19:16:41.351790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.351805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.533 [2024-07-24 19:16:41.351819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.351834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.533 [2024-07-24 19:16:41.351848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.351862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edaad0 is same with the state(5) to be set 00:19:35.533 [2024-07-24 19:16:41.351908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.533 [2024-07-24 19:16:41.351928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.351944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.533 [2024-07-24 19:16:41.351959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.351974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.533 [2024-07-24 19:16:41.351988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.352004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.533 [2024-07-24 19:16:41.352018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.352032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f05100 is same with the state(5) to be set 00:19:35.533 [2024-07-24 19:16:41.352916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.352946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.352972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.352989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-07-24 19:16:41.353598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-07-24 19:16:41.353615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.353630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.353647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.353662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.353679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.353694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.353711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.353725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.353743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.353757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.353775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.353790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.353806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.353821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.353843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.353858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.353875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.353890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.353907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.353922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.353939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:35.534 [2024-07-24 19:16:41.353953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.353970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.353985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 
19:16:41.354276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-07-24 19:16:41.354608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-07-24 19:16:41.354623] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.534 [2024-07-24 19:16:41.354641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.534 [2024-07-24 19:16:41.354655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.535 [2024-07-24 19:16:41.355027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.535 [2024-07-24 19:16:41.355041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.535 [2024-07-24 19:16:41.355125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:19:35.535 [2024-07-24 19:16:41.355222] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ed51c0 was disconnected and freed. reset controller.
00:19:35.535 [2024-07-24 19:16:41.355818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.535 [2024-07-24 19:16:41.355849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.537 [2024-07-24 19:16:41.357977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.537 [2024-07-24 19:16:41.357992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.537 [2024-07-24 19:16:41.358008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fab360 is same with the state(5) to be set
00:19:35.537 [2024-07-24 19:16:41.358541] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fab360 was disconnected and freed. reset controller.
00:19:35.537 [2024-07-24 19:16:41.362049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:19:35.537 [2024-07-24 19:16:41.362149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f06990 (9): Bad file descriptor
00:19:35.537 [2024-07-24 19:16:41.362191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204f270 (9): Bad file descriptor
00:19:35.537 [2024-07-24 19:16:41.362224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a2890 (9): Bad file descriptor
00:19:35.537 [2024-07-24 19:16:41.362257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204fd90 (9): Bad file descriptor
00:19:35.537 [2024-07-24 19:16:41.362292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e0610 (9): Bad file descriptor
00:19:35.537 [2024-07-24 19:16:41.362326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a3490 (9): Bad file descriptor
00:19:35.537 [2024-07-24 19:16:41.362359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6690 (9): Bad file descriptor
00:19:35.537 [2024-07-24 19:16:41.362391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f09100 (9): Bad file descriptor
00:19:35.537 [2024-07-24 19:16:41.362423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edaad0 (9): Bad file descriptor
00:19:35.537 [2024-07-24 19:16:41.362453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f05100 (9): Bad file descriptor
00:19:35.537 [2024-07-24 19:16:41.365304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:19:35.537 [2024-07-24 19:16:41.365624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.537 [2024-07-24 19:16:41.365664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f06990 with addr=10.0.0.2, port=4420
00:19:35.537 [2024-07-24 19:16:41.365684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f06990 is same with the state(5) to be set
00:19:35.537 [2024-07-24 19:16:41.365763] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:35.537 [2024-07-24 19:16:41.365840] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:35.537 [2024-07-24 19:16:41.365910] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:35.537 [2024-07-24 19:16:41.365977] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:35.537 [2024-07-24 19:16:41.366067] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:35.537 [2024-07-24 19:16:41.366163] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:35.537 [2024-07-24 19:16:41.366266] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:35.537 [2024-07-24 19:16:41.366661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.537 [2024-07-24 19:16:41.366690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.537 [2024-07-24 19:16:41.366739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.537 [2024-07-24 19:16:41.366757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.537 [2024-07-24 19:16:41.366775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.537 [2024-07-24 19:16:41.366791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.537 [2024-07-24 19:16:41.366808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.537 [2024-07-24 19:16:41.366823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.537 [2024-07-24 19:16:41.366841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.537 [2024-07-24 19:16:41.366856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.537 [2024-07-24 19:16:41.366874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.537 [2024-07-24 19:16:41.366889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.537 [2024-07-24 19:16:41.366906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.537 [2024-07-24 19:16:41.366921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.537 [2024-07-24 19:16:41.366938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.537 [2024-07-24 19:16:41.366953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.537 [2024-07-24 19:16:41.366971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.537 [2024-07-24 19:16:41.366986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.537 [2024-07-24 19:16:41.367003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.537 [2024-07-24 19:16:41.367018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.537 [2024-07-24 19:16:41.367035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.537 [2024-07-24 19:16:41.367051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.537 [2024-07-24 19:16:41.367068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.537 [2024-07-24 19:16:41.367083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.537 [2024-07-24 19:16:41.367099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa9eb0 is same with the state(5) to be set
00:19:35.537 [2024-07-24 19:16:41.367178] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fa9eb0 was disconnected and freed. reset controller.
00:19:35.537 [2024-07-24 19:16:41.367381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.537 [2024-07-24 19:16:41.367422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204f270 with addr=10.0.0.2, port=4420
00:19:35.537 [2024-07-24 19:16:41.367445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204f270 is same with the state(5) to be set
00:19:35.537 [2024-07-24 19:16:41.367478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f06990 (9): Bad file descriptor
00:19:35.537 [2024-07-24 19:16:41.368703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:19:35.537 [2024-07-24 19:16:41.368768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204f270 (9): Bad file descriptor
00:19:35.537 [2024-07-24 19:16:41.368793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:19:35.537 [2024-07-24 19:16:41.368808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:19:35.537 [2024-07-24 19:16:41.368827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:19:35.537 [2024-07-24 19:16:41.368948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.537 [2024-07-24 19:16:41.369091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.537 [2024-07-24 19:16:41.369120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204fd90 with addr=10.0.0.2, port=4420
00:19:35.537 [2024-07-24 19:16:41.369139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd90 is same with the state(5) to be set
00:19:35.537 [2024-07-24 19:16:41.369154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:19:35.537 [2024-07-24 19:16:41.369168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:19:35.537 [2024-07-24 19:16:41.369182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:19:35.537 [2024-07-24 19:16:41.369584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.537 [2024-07-24 19:16:41.369622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204fd90 (9): Bad file descriptor
00:19:35.537 [2024-07-24 19:16:41.369700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:19:35.537 [2024-07-24 19:16:41.369719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:19:35.537 [2024-07-24 19:16:41.369733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:19:35.537 [2024-07-24 19:16:41.369809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.537 [2024-07-24 19:16:41.372319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.537 [2024-07-24 19:16:41.372382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.539 [2024-07-24 19:16:41.374511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.539 [2024-07-24 19:16:41.374527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.539 [2024-07-24 19:16:41.374546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb1e80 is same with the state(5) to be set
00:19:35.539 [2024-07-24 19:16:41.376080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.539 [2024-07-24 19:16:41.376116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.540 [2024-07-24 19:16:41.377229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.540 [2024-07-24 19:16:41.377244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.540 [2024-07-24 19:16:41.377261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.540 [2024-07-24 19:16:41.377276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.540 [2024-07-24 19:16:41.377293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.540 [2024-07-24 19:16:41.377307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.540 [2024-07-24 19:16:41.377325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.540 [2024-07-24 19:16:41.377340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.540 [2024-07-24 19:16:41.377356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.540 [2024-07-24 19:16:41.377372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.540 [2024-07-24 19:16:41.377390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.540 [2024-07-24 19:16:41.377405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.540 [2024-07-24 19:16:41.377422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.540 [2024-07-24 19:16:41.377441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.540 [2024-07-24 19:16:41.377459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.540 [2024-07-24 19:16:41.377473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.540 [2024-07-24 19:16:41.377498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.540 [2024-07-24 19:16:41.377515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.377547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.377579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.377619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.377650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.377682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.377715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.377747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.377779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.377811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.377843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.377879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.377912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.377944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.377977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.377994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.378009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.378026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.378041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.378058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.378073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.378091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.378106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.378123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.378137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.378155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.378170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.378187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.378201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.541 [2024-07-24 19:16:41.378219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-24 19:16:41.378234] nvme_qpair.c: 
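The "(00/08)" printed with every one of these aborts is the NVMe Status Code Type / Status Code pair: SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion", which is what commands still queued on a submission queue complete with when that queue is deleted. A minimal, self-contained C sketch of unpacking the 16-bit completion Status Field (bit layout per the NVMe base specification; the decode_status helper is ours for illustration, not an SPDK API):

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the 16-bit NVMe completion Status Field (NVMe base spec):
     * bit 0 = phase tag (P), bits 8:1 = status code (SC),
     * bits 11:9 = status code type (SCT), bit 14 = more (M),
     * bit 15 = do not retry (DNR). The "(00/08)" in this log is SCT/SC in hex. */
    static void decode_status(uint16_t status)
    {
        unsigned p   = status & 0x1;
        unsigned sc  = (status >> 1) & 0xff;
        unsigned sct = (status >> 9) & 0x7;
        unsigned m   = (status >> 14) & 0x1;
        unsigned dnr = (status >> 15) & 0x1;

        printf("(%02x/%02x) p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
               (sct == 0x0 && sc == 0x08) ? "  /* ABORTED - SQ DELETION */" : "");
    }

    int main(void)
    {
        /* SCT 0x0 (generic), SC 0x08 (Command Aborted due to SQ Deletion). */
        decode_status((uint16_t)((0x0 << 9) | (0x08 << 1)));
        return 0;
    }

Run as-is it reproduces the "(00/08) p:0 m:0 dnr:0" fields shown in each completion line above.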
00:19:35.541 [2024-07-24 19:16:41.379763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.541 [2024-07-24 19:16:41.379794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 more READ / ABORTED - SQ DELETION pairs: cid:1 through cid:62, lba stepping by 128 from 16512 to 24320 ...]
00:19:35.543 [2024-07-24 19:16:41.381878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.543 [2024-07-24 19:16:41.381893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.543 [2024-07-24 19:16:41.381910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20344e0 is same with the state(5) to be set
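Each drained queue in this trace has the same geometry: 64 outstanding 128-block reads, cid 0 through 63, starting at LBA 16384, so entry N sits at LBA 16384 + N * 128 and the deepest one (cid 63) at 24448, exactly the lba:24448 printed just before each recv-state error. A quick check of that arithmetic (plain C, illustrative only; the constants come from the log, not from the test source):

    #include <stdio.h>

    /* Verify the per-queue geometry visible above: 64 reads of 128 blocks,
     * first at LBA 16384, so cid N maps to lba 16384 + N * 128 and the last
     * queued entry (cid 63) lands at lba 24448. */
    int main(void)
    {
        const unsigned start_lba = 16384, len = 128, depth = 64;

        for (unsigned cid = 0; cid < depth; cid++) {
            unsigned lba = start_lba + cid * len;
            if (cid == 0 || cid == depth - 1) {
                printf("cid:%u lba:%u len:%u\n", cid, lba, len);
            }
        }
        return 0;
    }

This prints cid:0 lba:16384 and cid:63 lba:24448, matching the first and last command lines of each burst.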
00:19:35.543 [2024-07-24 19:16:41.383393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.543 [2024-07-24 19:16:41.383426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 19 more READ / ABORTED - SQ DELETION pairs: cid:18 through cid:36, lba 18688 through 20992 ...]
00:19:35.543 [2024-07-24 19:16:41.384092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.543 [2024-07-24 19:16:41.384107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE / ABORTED pairs cid:1 through cid:4 (lba 24704 through 25088), READ / ABORTED pairs cid:37 through cid:63 (lba 21120 through 24448), then WRITE / ABORTED pairs cid:5 through cid:15 (lba 25216 through 26496) ...]
00:19:35.545 [2024-07-24 19:16:41.385510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.545 [2024-07-24 19:16:41.385526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.545 [2024-07-24 19:16:41.385542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed3e50 is same with the state(5) to be set
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387391] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.545 [2024-07-24 19:16:41.387919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.545 [2024-07-24 19:16:41.387936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.387952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.387969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.387984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:35.546 [2024-07-24 19:16:41.388730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.388968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.388988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.389003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.389021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.389036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 
19:16:41.389053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.389068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.389086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.389101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.546 [2024-07-24 19:16:41.389118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.546 [2024-07-24 19:16:41.389133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.389150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed6650 is same with the state(5) to be set 00:19:35.547 [2024-07-24 19:16:41.390643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.390676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.390706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.390722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.390741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.390756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.390773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.390788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.390806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.390821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.390838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.390853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.390871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.390886] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.390903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.390935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.390953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.390969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.390986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.547 [2024-07-24 19:16:41.391798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.547 [2024-07-24 19:16:41.391815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.391831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.391848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.391863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.391880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.391895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.391912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.391926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.391944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.391959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.391976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.391991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.392787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.392803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2807f00 is same with the state(5) to be set 00:19:35.548 [2024-07-24 19:16:41.394307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.394340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.394368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.394385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.394402] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.394418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.394435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.394450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.394467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.394490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.394509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.548 [2024-07-24 19:16:41.394532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.548 [2024-07-24 19:16:41.394550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.394565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.394583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.394598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.394615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.394630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.394647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.394662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.394679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.394694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.394712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.394727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.394743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.394758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.394776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.394790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.394807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.394822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.394839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.394854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.394871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.394886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.394903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.394918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.394939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.394956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.394974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.394988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.395006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.395021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.395038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.549 [2024-07-24 19:16:41.395053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.549 [2024-07-24 19:16:41.395070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.549 [2024-07-24 19:16:41.395085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.549 [2024-07-24 19:16:41.395102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.549 [2024-07-24 19:16:41.395117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:24 through cid:61 (lba 19456 through 24192, advancing 128 blocks per command) ...]
00:19:35.550 [2024-07-24 19:16:41.396379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.550 [2024-07-24 19:16:41.396394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.550 [2024-07-24 19:16:41.396411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:35.550 [2024-07-24 19:16:41.396426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.550 [2024-07-24 19:16:41.396442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29af9d0 is same with the state(5) to be set
00:19:35.550 [2024-07-24 19:16:41.398640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:35.550 [2024-07-24 19:16:41.398697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:19:35.550 [2024-07-24 19:16:41.398718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:19:35.550 [2024-07-24 19:16:41.398736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:19:35.550 [2024-07-24 19:16:41.398892] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:35.550 [2024-07-24 19:16:41.398920] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:35.550 [2024-07-24 19:16:41.398943] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:35.550 [2024-07-24 19:16:41.399063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:19:35.550 [2024-07-24 19:16:41.399088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:19:35.550 task offset: 23040 on job bdev=Nvme5n1 fails
00:19:35.550
00:19:35.550                                                Latency(us)
00:19:35.550 Device Information : runtime(s)     IOPS    MiB/s   Fail/s    TO/s     Average        min        max
00:19:35.550 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:35.550 Job: Nvme1n1 ended in about 0.94 seconds with error
00:19:35.551 Verification LBA range: start 0x0 length 0x400
00:19:35.551 Nvme1n1  :          0.94           136.24    8.52    68.12    0.00   309603.49   26408.58   318456.41
00:19:35.551 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:35.551 Job: Nvme2n1 ended in about 0.94 seconds with error
00:19:35.551 Verification LBA range: start 0x0 length 0x400
00:19:35.551 Nvme2n1  :          0.94           135.71    8.48    67.85    0.00   304370.98   20291.89   316902.97
00:19:35.551 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:35.551 Job: Nvme3n1 ended in about 0.95 seconds with error
00:19:35.551 Verification LBA range: start 0x0 length 0x400
00:19:35.551 Nvme3n1  :          0.95           135.19    8.45    67.59    0.00   299136.95   17087.91   290494.39
00:19:35.551 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:35.551 Job: Nvme4n1 ended in about 0.95 seconds with error
00:19:35.551 Verification LBA range: start 0x0 length 0x400
00:19:35.551 Nvme4n1  :          0.95           152.56    9.54    67.34    0.00   270169.52   38447.79   287387.50
00:19:35.551 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:35.551 Job: Nvme5n1 ended in about 0.92 seconds with error
00:19:35.551 Verification LBA range: start 0x0 length 0x400
00:19:35.551 Nvme5n1  :          0.92           138.53    8.66    69.27    0.00   279006.37    6505.05   318456.41
00:19:35.551 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:35.551 Job: Nvme6n1 ended in about 0.95 seconds with error
00:19:35.551 Verification LBA range: start 0x0 length 0x400
00:19:35.551 Nvme6n1  :          0.95           134.16    8.39    67.08    0.00   282553.96   20583.16   298261.62
00:19:35.551 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:35.551 Job: Nvme7n1 ended in about 0.96 seconds with error
00:19:35.551 Verification LBA range: start 0x0 length 0x400
00:19:35.551 Nvme7n1  :          0.96           137.83    8.61    66.83    0.00   271751.24   22622.06   273406.48
00:19:35.551 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:35.551 Job: Nvme8n1 ended in about 0.96 seconds with error
00:19:35.551 Verification LBA range: start 0x0 length 0x400
00:19:35.551 Nvme8n1  :          0.96           133.15    8.32    66.57    0.00   272192.85   39224.51   326223.64
00:19:35.551 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:35.551 Job: Nvme9n1 ended in about 0.93 seconds with error
00:19:35.551 Verification LBA range: start 0x0 length 0x400
00:19:35.551 Nvme9n1  :          0.93           134.08    8.38    12.87    0.00   358096.22   24855.13   347971.89
00:19:35.551 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:35.551 Job: Nvme10n1 ended in about 0.93 seconds with error
00:19:35.551 Verification LBA range: start 0x0 length 0x400
00:19:35.551 Nvme10n1 :          0.93           138.28    8.64    69.14    0.00   247894.47   11116.85   324670.20
00:19:35.551 ===================================================================================================================
00:19:35.551 Total    :                        1375.74   85.98   622.67    0.00   287264.90    6505.05   347971.89
00:19:35.551 [2024-07-24 19:16:41.426620] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:35.551 [2024-07-24 19:16:41.426729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:19:35.551 [2024-07-24 19:16:41.427087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.551 [2024-07-24 19:16:41.427128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edaad0 with addr=10.0.0.2, port=4420
00:19:35.551 [2024-07-24 19:16:41.427150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edaad0 is same with the state(5) to be set
00:19:35.551 [2024-07-24 19:16:41.427283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.551 [2024-07-24 19:16:41.427311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a6690 with addr=10.0.0.2, port=4420
00:19:35.551 [2024-07-24 19:16:41.427327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6690 is same with the state(5) to be set
00:19:35.551 [2024-07-24 19:16:41.427452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.551 [2024-07-24 19:16:41.427491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f05100 with addr=10.0.0.2, port=4420
00:19:35.551 [2024-07-24 19:16:41.427510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f05100 is same with the state(5) to be set
00:19:35.551 [2024-07-24 19:16:41.427649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.551 [2024-07-24 19:16:41.427675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f09100 with addr=10.0.0.2, port=4420
00:19:35.551 [2024-07-24 19:16:41.427692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f09100 is same with the state(5) to be set
00:19:35.551 [2024-07-24 19:16:41.429928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:19:35.551 [2024-07-24 19:16:41.430009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:19:35.551 [2024-07-24 19:16:41.430322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.551 [2024-07-24 19:16:41.430365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e0610 with addr=10.0.0.2, port=4420
00:19:35.551 [2024-07-24 19:16:41.430386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e0610 is same with the state(5) to be set
00:19:35.551 [2024-07-24 19:16:41.430521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.551 [2024-07-24 19:16:41.430549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a3490 with addr=10.0.0.2, port=4420
00:19:35.551 [2024-07-24 19:16:41.430566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a3490 is same with the state(5) to be set
00:19:35.551 [2024-07-24 19:16:41.430733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.551 [2024-07-24 19:16:41.430759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a2890 with addr=10.0.0.2, port=4420
00:19:35.551 [2024-07-24 19:16:41.430775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a2890 is same with the state(5) to be set
00:19:35.551 [2024-07-24 19:16:41.430806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edaad0 (9): Bad file descriptor
00:19:35.551 [2024-07-24 19:16:41.430831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6690 (9): Bad file descriptor
00:19:35.551 [2024-07-24 19:16:41.430851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f05100 (9): Bad file descriptor
00:19:35.551 [2024-07-24 19:16:41.430869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f09100 (9): Bad file descriptor
00:19:35.551 [2024-07-24 19:16:41.430927] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:35.551 [2024-07-24 19:16:41.430960] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:35.551 [2024-07-24 19:16:41.430980] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:35.551 [2024-07-24 19:16:41.431004] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:35.551 [2024-07-24 19:16:41.431030] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
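Every reconnect attempt above dies in posix_sock_create() with errno = 111 (ECONNREFUSED), which is the expected signature for this test: nvmf_shutdown_tc3 kills the target while bdevperf still holds controllers, so nothing is listening on 10.0.0.2:4420 any more. A minimal sketch for confirming that by hand from the initiator side, assuming the address and port reported in the log; the bash /dev/tcp redirection performs the same connect() the SPDK sock layer keeps retrying:

    #!/usr/bin/env bash
    # Probe the NVMe-oF TCP listener the failing qpairs point at.
    # A refused or timed-out connect matches the errno = 111 entries above.
    TARGET=10.0.0.2 PORT=4420
    if timeout 2 bash -c "</dev/tcp/${TARGET}/${PORT}" 2>/dev/null; then
        echo "listener present on ${TARGET}:${PORT}"
    else
        echo "no listener on ${TARGET}:${PORT} (connect refused or timed out)"
    fi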
00:19:35.551 [2024-07-24 19:16:41.431134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:19:35.551 [2024-07-24 19:16:41.431348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.551 [2024-07-24 19:16:41.431377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f06990 with addr=10.0.0.2, port=4420
00:19:35.551 [2024-07-24 19:16:41.431393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f06990 is same with the state(5) to be set
00:19:35.551 [2024-07-24 19:16:41.431491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.551 [2024-07-24 19:16:41.431517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204f270 with addr=10.0.0.2, port=4420
00:19:35.551 [2024-07-24 19:16:41.431534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204f270 is same with the state(5) to be set
00:19:35.551 [2024-07-24 19:16:41.431554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e0610 (9): Bad file descriptor
00:19:35.551 [2024-07-24 19:16:41.431574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a3490 (9): Bad file descriptor
00:19:35.551 [2024-07-24 19:16:41.431593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a2890 (9): Bad file descriptor
00:19:35.551 [2024-07-24 19:16:41.431611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:35.551 [2024-07-24 19:16:41.431625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:35.551 [2024-07-24 19:16:41.431643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:35.551 [2024-07-24 19:16:41.431665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:19:35.551 [2024-07-24 19:16:41.431680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:19:35.551 [2024-07-24 19:16:41.431693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:19:35.551 [2024-07-24 19:16:41.431711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:19:35.551 [2024-07-24 19:16:41.431725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:19:35.551 [2024-07-24 19:16:41.431738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:19:35.551 [2024-07-24 19:16:41.431758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:19:35.551 [2024-07-24 19:16:41.431772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:19:35.551 [2024-07-24 19:16:41.431786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:19:35.551 [2024-07-24 19:16:41.431910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.551 [2024-07-24 19:16:41.431931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.551 [2024-07-24 19:16:41.431944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.551 [2024-07-24 19:16:41.431957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.551 [2024-07-24 19:16:41.432096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.552 [2024-07-24 19:16:41.432122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204fd90 with addr=10.0.0.2, port=4420
00:19:35.552 [2024-07-24 19:16:41.432138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd90 is same with the state(5) to be set
00:19:35.552 [2024-07-24 19:16:41.432158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f06990 (9): Bad file descriptor
00:19:35.552 [2024-07-24 19:16:41.432178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204f270 (9): Bad file descriptor
00:19:35.552 [2024-07-24 19:16:41.432200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:19:35.552 [2024-07-24 19:16:41.432214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:19:35.552 [2024-07-24 19:16:41.432228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:19:35.552 [2024-07-24 19:16:41.432247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:19:35.552 [2024-07-24 19:16:41.432262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:19:35.552 [2024-07-24 19:16:41.432275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:19:35.552 [2024-07-24 19:16:41.432292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:19:35.552 [2024-07-24 19:16:41.432305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:19:35.552 [2024-07-24 19:16:41.432319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:19:35.552 [2024-07-24 19:16:41.432365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.552 [2024-07-24 19:16:41.432383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.552 [2024-07-24 19:16:41.432396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.552 [2024-07-24 19:16:41.432412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204fd90 (9): Bad file descriptor
00:19:35.552 [2024-07-24 19:16:41.432430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:19:35.552 [2024-07-24 19:16:41.432444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:19:35.552 [2024-07-24 19:16:41.432458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
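All ten subsystems (cnode1 through cnode10) walk the same nvme_ctrlr_process_init -> spdk_nvme_ctrlr_reconnect_poll_async -> nvme_ctrlr_fail sequence, which is easy to miscount in the interleaved stream. A small sketch for tallying the terminal failures from a saved copy of this console output; the build.log filename is a placeholder:

    # one counted line per subsystem that ended up in failed state
    grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*] in failed state' build.log \
        | sort -V | uniq -c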
00:19:35.552 [2024-07-24 19:16:41.432475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:19:35.552 [2024-07-24 19:16:41.432497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:19:35.552 [2024-07-24 19:16:41.432512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:19:35.552 [2024-07-24 19:16:41.432557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.552 [2024-07-24 19:16:41.432575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.552 [2024-07-24 19:16:41.432588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:19:35.552 [2024-07-24 19:16:41.432601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:19:35.552 [2024-07-24 19:16:41.432615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:19:35.552 [2024-07-24 19:16:41.432654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.812 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:19:35.812 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:19:37.192 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2596704
00:19:37.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2596704) - No such process
00:19:37.192 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:19:37.192 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:19:37.192 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:19:37.192 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:37.192 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:37.192 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:19:37.192 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:37.192 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:19:37.192 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:37.192 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:19:37.192 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:37.192 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:37.192 rmmod nvme_tcp
00:19:37.192 rmmod nvme_fabrics
00:19:37.192 rmmod nvme_keyring
00:19:37.192 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:39.130 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:39.130
00:19:39.130 real 0m7.173s
00:19:39.130 user 0m16.924s
00:19:39.130 sys 0m1.326s
19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable
19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:19:39.130 ************************************
00:19:39.130 END TEST nvmf_shutdown_tc3
00:19:39.130 ************************************
19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:19:39.130
00:19:39.130 real 0m26.583s
00:19:39.130 user 1m14.578s
00:19:39.130 sys 0m5.874s
19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:19:39.130 ************************************
00:19:39.130 END TEST nvmf_shutdown
00:19:39.130 ************************************
19:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT
00:19:39.130
00:19:39.130 real 10m46.706s
00:19:39.130 user 25m56.942s
00:19:39.130 sys 2m22.820s
19:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
19:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:39.130 ************************************
00:19:39.130 END TEST nvmf_target_extra
00:19:39.130 ************************************
19:16:44 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
19:16:44 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1
']' 00:19:39.130 19:16:44 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:39.130 19:16:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:39.130 ************************************ 00:19:39.130 START TEST nvmf_host 00:19:39.130 ************************************ 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:39.130 * Looking for test storage... 00:19:39.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:39.130 19:16:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:19:39.131 19:16:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:19:39.131 19:16:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:39.131 19:16:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:39.131 19:16:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:39.131 19:16:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.131 ************************************ 00:19:39.131 START TEST nvmf_multicontroller 00:19:39.131 ************************************ 00:19:39.131 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:39.413 * Looking for test storage... 
00:19:39.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:39.413 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.413 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:39.413 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.413 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.413 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.413 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.413 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.414 19:16:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:19:39.414 19:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.795 19:16:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:40.795 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:40.795 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:40.796 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.796 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:41.055 Found net devices under 0000:08:00.0: cvl_0_0 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:41.055 Found net devices under 0000:08:00.1: cvl_0_1 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:41.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:19:41.055 00:19:41.055 --- 10.0.0.2 ping statistics --- 00:19:41.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.055 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:41.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:41.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:19:41.055 00:19:41.055 --- 10.0.0.1 ping statistics --- 00:19:41.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.055 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2598604 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2598604 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2598604 ']' 00:19:41.055 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.056 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.056 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.056 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.056 19:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.056 [2024-07-24 19:16:47.005828] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
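The nvmf_tcp_init records above set up the two-namespace topology that every tcp-phy test in this log reuses: one port of the e810 NIC, cvl_0_0, is moved into a private namespace and addressed 10.0.0.2 as the target side, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator. A minimal standalone sketch of the same plumbing, using the interface names, addresses, and nvmf_tgt flags shown in the log; the relative binary path is a stand-in for the full workspace path above:

  # target side lives in its own network namespace, on real e810 hardware
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps cvl_0_1 in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP port 4420 (NVMe/TCP) on the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify reachability both ways, then launch the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The -m 0xE core mask selects cores 1-3, matching the three "Reactor started" notices printed below.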
00:19:41.056 [2024-07-24 19:16:47.005925] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.056 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.316 [2024-07-24 19:16:47.070912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:41.316 [2024-07-24 19:16:47.187199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.316 [2024-07-24 19:16:47.187264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.316 [2024-07-24 19:16:47.187279] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.316 [2024-07-24 19:16:47.187301] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.316 [2024-07-24 19:16:47.187314] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:41.316 [2024-07-24 19:16:47.187398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.316 [2024-07-24 19:16:47.187448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:41.316 [2024-07-24 19:16:47.187451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.316 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.316 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:19:41.316 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.316 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:41.316 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.316 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.316 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:41.316 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.316 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.316 [2024-07-24 19:16:47.325181] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.575 Malloc0 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.575 
19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.575 [2024-07-24 19:16:47.395778] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.575 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.576 [2024-07-24 19:16:47.403646] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.576 Malloc1 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.576 19:16:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2598716 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2598716 /var/tmp/bdevperf.sock 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2598716 ']' 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
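host/multicontroller.sh drives bdevperf in RPC mode: -z holds off the workload until it is told to run over the socket passed via -r, so controllers can be attached over JSON-RPC first. A rough sketch of that flow with the workload parameters from the log (128-deep 4 KiB writes for 1 second), substituting scripts/rpc.py for the suite's rpc_cmd wrapper:

  # start bdevperf idle on a private RPC socket
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  # attach the subsystem's first listener as bdev NVMe0; the JSON request below
  # shows -i and -c land in hostaddr/hostsvcid, pinning the initiator side of the
  # TCP 4-tuple so repeat attaches hit the exact same network path
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # once the attach experiments are done, fire the configured workload
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests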
00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.576 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.835 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.835 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:19:41.835 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:41.835 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.835 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:42.096 NVMe0n1 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.096 1 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:42.096 request: 00:19:42.096 { 00:19:42.096 "name": "NVMe0", 00:19:42.096 "trtype": "tcp", 00:19:42.096 "traddr": "10.0.0.2", 00:19:42.096 "adrfam": "ipv4", 00:19:42.096 
"trsvcid": "4420", 00:19:42.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.096 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:42.096 "hostaddr": "10.0.0.2", 00:19:42.096 "hostsvcid": "60000", 00:19:42.096 "prchk_reftag": false, 00:19:42.096 "prchk_guard": false, 00:19:42.096 "hdgst": false, 00:19:42.096 "ddgst": false, 00:19:42.096 "method": "bdev_nvme_attach_controller", 00:19:42.096 "req_id": 1 00:19:42.096 } 00:19:42.096 Got JSON-RPC error response 00:19:42.096 response: 00:19:42.096 { 00:19:42.096 "code": -114, 00:19:42.096 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:42.096 } 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:42.096 request: 00:19:42.096 { 00:19:42.096 "name": "NVMe0", 00:19:42.096 "trtype": "tcp", 00:19:42.096 "traddr": "10.0.0.2", 00:19:42.096 "adrfam": "ipv4", 00:19:42.096 "trsvcid": "4420", 00:19:42.096 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:42.096 "hostaddr": "10.0.0.2", 00:19:42.096 "hostsvcid": "60000", 00:19:42.096 "prchk_reftag": false, 00:19:42.096 "prchk_guard": false, 00:19:42.096 "hdgst": false, 00:19:42.096 "ddgst": false, 00:19:42.096 "method": "bdev_nvme_attach_controller", 00:19:42.096 "req_id": 1 00:19:42.096 } 00:19:42.096 Got JSON-RPC error response 00:19:42.096 response: 00:19:42.096 { 00:19:42.096 "code": -114, 00:19:42.096 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:19:42.096 } 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.096 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:42.097 request: 00:19:42.097 { 00:19:42.097 "name": "NVMe0", 00:19:42.097 "trtype": "tcp", 00:19:42.097 "traddr": "10.0.0.2", 00:19:42.097 "adrfam": "ipv4", 00:19:42.097 "trsvcid": "4420", 00:19:42.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.097 "hostaddr": "10.0.0.2", 00:19:42.097 "hostsvcid": "60000", 00:19:42.097 "prchk_reftag": false, 00:19:42.097 "prchk_guard": false, 00:19:42.097 "hdgst": false, 00:19:42.097 "ddgst": false, 00:19:42.097 "multipath": "disable", 00:19:42.097 "method": "bdev_nvme_attach_controller", 00:19:42.097 "req_id": 1 00:19:42.097 } 00:19:42.097 Got JSON-RPC error response 00:19:42.097 response: 00:19:42.097 { 00:19:42.097 "code": -114, 00:19:42.097 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:19:42.097 } 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:42.097 request: 00:19:42.097 { 00:19:42.097 "name": "NVMe0", 00:19:42.097 "trtype": "tcp", 00:19:42.097 "traddr": "10.0.0.2", 00:19:42.097 "adrfam": "ipv4", 00:19:42.097 "trsvcid": "4420", 00:19:42.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.097 "hostaddr": "10.0.0.2", 00:19:42.097 "hostsvcid": "60000", 00:19:42.097 "prchk_reftag": false, 00:19:42.097 "prchk_guard": false, 00:19:42.097 "hdgst": false, 00:19:42.097 "ddgst": false, 00:19:42.097 "multipath": "failover", 00:19:42.097 "method": "bdev_nvme_attach_controller", 00:19:42.097 "req_id": 1 00:19:42.097 } 00:19:42.097 Got JSON-RPC error response 00:19:42.097 response: 00:19:42.097 { 00:19:42.097 "code": -114, 00:19:42.097 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:42.097 } 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.097 19:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:42.356 00:19:42.356 19:16:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.356 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:42.356 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.356 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:42.356 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.356 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:42.356 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.356 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:42.356 00:19:42.356 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.356 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:42.357 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:42.357 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.357 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:42.357 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.357 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:42.357 19:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:43.734 0 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2598716 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2598716 ']' 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2598716 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2598716 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
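The four NOT-wrapped attach attempts above chart the -114 failure mode: reusing controller name NVMe0 against the same 10.0.0.2:4420 path is refused in all four variations tried (a different hostnqn, a different subsystem NQN, -x disable, -x failover), while attaching the same name to the subsystem's second listener on port 4421 succeeds, and bdev_nvme_get_controllers counts 2 once NVMe1 is added alongside it. Reduced to the two contrasting calls from the log, with rpc.py again standing in for rpc_cmd:

  # refused with -114: name NVMe0 already owns this exact network path
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
  # accepted: same name, same subsystem, but the second listener (port 4421)
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1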
00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2598716' 00:19:43.734 killing process with pid 2598716 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2598716 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2598716 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:19:43.734 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:43.734 [2024-07-24 19:16:47.509192] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:19:43.734 [2024-07-24 19:16:47.509298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2598716 ] 00:19:43.734 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.734 [2024-07-24 19:16:47.570844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.734 [2024-07-24 19:16:47.687641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.734 [2024-07-24 19:16:48.232205] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 698ec42a-2aeb-4030-96e1-79e9f39671a7 already exists 00:19:43.734 [2024-07-24 19:16:48.232249] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:698ec42a-2aeb-4030-96e1-79e9f39671a7 alias for bdev NVMe1n1 00:19:43.734 [2024-07-24 19:16:48.232266] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:43.734 Running I/O for 1 seconds... 00:19:43.734 00:19:43.734 Latency(us) 00:19:43.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.734 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:43.734 NVMe0n1 : 1.01 16800.51 65.63 0.00 0.00 7605.09 6456.51 14466.47 00:19:43.734 =================================================================================================================== 00:19:43.734 Total : 16800.51 65.63 0.00 0.00 7605.09 6456.51 14466.47 00:19:43.734 Received shutdown signal, test time was about 1.000000 seconds 00:19:43.734 00:19:43.734 Latency(us) 00:19:43.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.734 =================================================================================================================== 00:19:43.734 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:43.734 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:43.734 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:43.734 rmmod nvme_tcp 00:19:43.734 rmmod nvme_fabrics 00:19:43.734 rmmod nvme_keyring 00:19:43.735 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:43.735 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:19:43.735 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:19:43.735 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2598604 ']' 00:19:43.735 19:16:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2598604 00:19:43.735 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2598604 ']' 00:19:43.735 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2598604 00:19:43.996 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:19:43.996 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.996 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2598604 00:19:43.996 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:43.996 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:43.996 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2598604' 00:19:43.996 killing process with pid 2598604 00:19:43.996 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2598604 00:19:43.996 19:16:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2598604 00:19:44.253 19:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:44.253 19:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:44.253 19:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:44.253 19:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:44.253 19:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:44.253 19:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.253 19:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.253 19:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.163 19:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:46.163 00:19:46.163 real 0m6.951s 00:19:46.163 user 0m11.349s 00:19:46.163 sys 0m1.939s 00:19:46.163 19:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:46.163 19:16:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:46.163 ************************************ 00:19:46.163 END TEST nvmf_multicontroller 00:19:46.163 ************************************ 00:19:46.163 19:16:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:46.163 19:16:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:46.163 19:16:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:46.163 19:16:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.163 ************************************ 00:19:46.163 START TEST nvmf_aer 00:19:46.163 ************************************ 00:19:46.163 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:46.163 * Looking for test storage... 00:19:46.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.422 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:19:46.423 19:16:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:47.801 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:47.801 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.801 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:47.802 Found net devices under 0000:08:00.0: cvl_0_0 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.802 19:16:53 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:47.802 Found net devices under 0000:08:00.1: cvl_0_1 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:47.802 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:48.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:19:48.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:19:48.061 00:19:48.061 --- 10.0.0.2 ping statistics --- 00:19:48.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.061 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:19:48.061 00:19:48.061 --- 10.0.0.1 ping statistics --- 00:19:48.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.061 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2600422 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2600422 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2600422 ']' 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:48.061 19:16:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.061 [2024-07-24 19:16:53.944467] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
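Before the target app comes up, nvmf_tcp_init builds the two-namespace topology whose commands are scattered through the trace above. Condensed into a sketch (reconstructed from the trace, not copied verbatim from nvmf/common.sh; cvl_0_0/cvl_0_1 are the ice-driver interface names found during discovery, so substitute your own):

# Sketch of the topology built above: the target-side port (cvl_0_0) is moved
# into its own network namespace, the initiator-side port (cvl_0_1) stays in
# the default namespace, and a single /24 links the two.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP in
ping -c 1 10.0.0.2                                 # default ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> default ns

The two pings above are the sanity gate: only after both namespaces can reach each other does the script start nvmf_tgt inside cvl_0_0_ns_spdk.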
00:19:48.061 [2024-07-24 19:16:53.944577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.061 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.061 [2024-07-24 19:16:54.010132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.321 [2024-07-24 19:16:54.127953] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.321 [2024-07-24 19:16:54.128008] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.321 [2024-07-24 19:16:54.128024] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.321 [2024-07-24 19:16:54.128037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.321 [2024-07-24 19:16:54.128050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.321 [2024-07-24 19:16:54.128151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.321 [2024-07-24 19:16:54.128217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.321 [2024-07-24 19:16:54.128266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.321 [2024-07-24 19:16:54.128270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.321 [2024-07-24 19:16:54.273809] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.321 Malloc0 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.321 19:16:54 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.321 [2024-07-24 19:16:54.323607] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.321 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.581 [ 00:19:48.581 { 00:19:48.581 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:48.581 "subtype": "Discovery", 00:19:48.581 "listen_addresses": [], 00:19:48.581 "allow_any_host": true, 00:19:48.581 "hosts": [] 00:19:48.581 }, 00:19:48.581 { 00:19:48.581 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.581 "subtype": "NVMe", 00:19:48.581 "listen_addresses": [ 00:19:48.581 { 00:19:48.581 "trtype": "TCP", 00:19:48.581 "adrfam": "IPv4", 00:19:48.581 "traddr": "10.0.0.2", 00:19:48.581 "trsvcid": "4420" 00:19:48.581 } 00:19:48.581 ], 00:19:48.581 "allow_any_host": true, 00:19:48.581 "hosts": [], 00:19:48.581 "serial_number": "SPDK00000000000001", 00:19:48.581 "model_number": "SPDK bdev Controller", 00:19:48.581 "max_namespaces": 2, 00:19:48.581 "min_cntlid": 1, 00:19:48.581 "max_cntlid": 65519, 00:19:48.581 "namespaces": [ 00:19:48.581 { 00:19:48.581 "nsid": 1, 00:19:48.581 "bdev_name": "Malloc0", 00:19:48.581 "name": "Malloc0", 00:19:48.581 "nguid": "D4947A79DD7243A389E20AAD75AE2C94", 00:19:48.581 "uuid": "d4947a79-dd72-43a3-89e2-0aad75ae2c94" 00:19:48.581 } 00:19:48.581 ] 00:19:48.581 } 00:19:48.581 ] 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2600454 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:48.581 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.581 Malloc1 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.581 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.840 [ 00:19:48.840 { 00:19:48.840 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:48.840 "subtype": "Discovery", 00:19:48.840 "listen_addresses": [], 00:19:48.840 "allow_any_host": true, 00:19:48.840 "hosts": [] 00:19:48.840 }, 00:19:48.840 { 00:19:48.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.840 "subtype": "NVMe", 00:19:48.840 "listen_addresses": [ 00:19:48.840 { 00:19:48.840 "trtype": "TCP", 00:19:48.840 "adrfam": "IPv4", 00:19:48.840 "traddr": "10.0.0.2", 00:19:48.840 "trsvcid": "4420" 00:19:48.840 } 00:19:48.840 ], 00:19:48.840 "allow_any_host": true, 00:19:48.840 "hosts": [], 00:19:48.840 "serial_number": "SPDK00000000000001", 00:19:48.840 "model_number": "SPDK bdev Controller", 00:19:48.840 "max_namespaces": 2, 00:19:48.840 "min_cntlid": 1, 00:19:48.840 "max_cntlid": 65519, 00:19:48.840 "namespaces": [ 00:19:48.840 { 00:19:48.840 "nsid": 1, 00:19:48.840 "bdev_name": "Malloc0", 00:19:48.840 "name": "Malloc0", 00:19:48.840 "nguid": "D4947A79DD7243A389E20AAD75AE2C94", 00:19:48.840 "uuid": "d4947a79-dd72-43a3-89e2-0aad75ae2c94" 00:19:48.840 }, 00:19:48.840 { 00:19:48.840 "nsid": 2, 00:19:48.840 "bdev_name": "Malloc1", 00:19:48.840 "name": "Malloc1", 00:19:48.840 "nguid": 
"CB4939253034417A870735E8F59148A4", 00:19:48.840 "uuid": "cb493925-3034-417a-8707-35e8f59148a4" 00:19:48.840 } 00:19:48.840 ] 00:19:48.840 } 00:19:48.840 ] 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2600454 00:19:48.840 Asynchronous Event Request test 00:19:48.840 Attaching to 10.0.0.2 00:19:48.840 Attached to 10.0.0.2 00:19:48.840 Registering asynchronous event callbacks... 00:19:48.840 Starting namespace attribute notice tests for all controllers... 00:19:48.840 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:48.840 aer_cb - Changed Namespace 00:19:48.840 Cleaning up... 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:48.840 rmmod nvme_tcp 00:19:48.840 rmmod nvme_fabrics 00:19:48.840 rmmod nvme_keyring 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2600422 ']' 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2600422 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2600422 ']' 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2600422 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@955 -- # uname 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2600422 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2600422' 00:19:48.840 killing process with pid 2600422 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2600422 00:19:48.840 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2600422 00:19:49.100 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:49.101 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:49.101 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:49.101 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:49.101 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:49.101 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.101 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.101 19:16:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.008 19:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:51.008 00:19:51.008 real 0m4.892s 00:19:51.008 user 0m3.864s 00:19:51.008 sys 0m1.573s 00:19:51.008 19:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:51.008 19:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.008 ************************************ 00:19:51.008 END TEST nvmf_aer 00:19:51.008 ************************************ 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.268 ************************************ 00:19:51.268 START TEST nvmf_async_init 00:19:51.268 ************************************ 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:51.268 * Looking for test storage... 
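The teardown above runs killprocess against the nvmf_tgt pid before unloading the nvme modules. Reconstructed from the checks visible in the trace, the helper amounts to roughly this (a simplified sketch; the real function in autotest_common.sh also handles sudo-wrapped processes and non-Linux hosts):

# Roughly what killprocess does, per the checks traced above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                      # no pid recorded
    kill -0 "$pid" 2> /dev/null || return 0        # process already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # reactor_0 is expected here; refuse to signal a bare sudo wrapper
        [ "$process_name" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}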
00:19:51.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:19:51.268 19:16:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3984d932fe2f4480b96bc276c7fea34f 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:51.268 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.269 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:51.269 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:51.269 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:51.269 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.269 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.269 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.269 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:51.269 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:51.269 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:19:51.269 19:16:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.176 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:53.177 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:53.177 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
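The discovery pass above matches PCI IDs against per-vendor arrays (0x1592/0x159b for Intel E810, 0x37d2 for X722, the Mellanox 0x10xx/0xa2xx range) and then keeps only the family this job targets (e810). The real nvmf/common.sh fills those arrays from a pre-built pci_bus_cache; a direct-sysfs equivalent, offered only as an illustration of the same matching, might look like this:

# Illustrative stand-in for gather_supported_nvmf_pci_devs: walk sysfs and
# collect Intel E810 ports by vendor/device ID (0x8086 / 0x1592 or 0x159b).
e810=()
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")
    device=$(<"$dev/device")
    if [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]]; then
        e810+=("${dev##*/}")
        echo "Found ${dev##*/} ($vendor - $device)"
    fi
done
pci_devs=("${e810[@]}")              # this job only cares about the e810 family
(( ${#pci_devs[@]} > 0 )) || exit 1  # (( 2 == 0 )) guard in the trace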
00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:53.177 Found net devices under 0000:08:00.0: cvl_0_0 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:53.177 Found net devices under 0000:08:00.1: cvl_0_1 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:53.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:53.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:19:53.177 00:19:53.177 --- 10.0.0.2 ping statistics --- 00:19:53.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.177 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:53.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:53.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:19:53.177 00:19:53.177 --- 10.0.0.1 ping statistics --- 00:19:53.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.177 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2601949 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2601949 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2601949 ']' 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:53.177 19:16:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.177 [2024-07-24 19:16:58.917789] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
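nvmfappstart then launches the target with a single-core mask (-m 0x1, versus 0xF in the aer run just finished) inside the namespace and blocks in waitforlisten until the RPC socket answers. A simplified stand-in for that wait loop (the rpc_get_methods probe is an assumption; the real helper lives in autotest_common.sh):

# Launch nvmf_tgt in the target namespace, then poll /var/tmp/spdk.sock.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do                      # max_retries=100, as above
    kill -0 "$nvmfpid" 2> /dev/null || exit 1        # target died during startup
    if ./scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        break                                        # socket is up; tests may proceed
    fi
    sleep 0.5
done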
00:19:53.178 [2024-07-24 19:16:58.917884] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.178 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.178 [2024-07-24 19:16:58.984803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.178 [2024-07-24 19:16:59.103511] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.178 [2024-07-24 19:16:59.103588] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.178 [2024-07-24 19:16:59.103604] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.178 [2024-07-24 19:16:59.103617] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.178 [2024-07-24 19:16:59.103629] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.178 [2024-07-24 19:16:59.103660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.439 [2024-07-24 19:16:59.243036] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.439 null0 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:53.439 19:16:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3984d932fe2f4480b96bc276c7fea34f 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.439 [2024-07-24 19:16:59.283280] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.439 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.699 nvme0n1 00:19:53.699 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.699 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:53.699 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.699 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.699 [ 00:19:53.699 { 00:19:53.699 "name": "nvme0n1", 00:19:53.699 "aliases": [ 00:19:53.699 "3984d932-fe2f-4480-b96b-c276c7fea34f" 00:19:53.699 ], 00:19:53.699 "product_name": "NVMe disk", 00:19:53.699 "block_size": 512, 00:19:53.699 "num_blocks": 2097152, 00:19:53.699 "uuid": "3984d932-fe2f-4480-b96b-c276c7fea34f", 00:19:53.699 "assigned_rate_limits": { 00:19:53.699 "rw_ios_per_sec": 0, 00:19:53.699 "rw_mbytes_per_sec": 0, 00:19:53.699 "r_mbytes_per_sec": 0, 00:19:53.699 "w_mbytes_per_sec": 0 00:19:53.699 }, 00:19:53.699 "claimed": false, 00:19:53.699 "zoned": false, 00:19:53.700 "supported_io_types": { 00:19:53.700 "read": true, 00:19:53.700 "write": true, 00:19:53.700 "unmap": false, 00:19:53.700 "flush": true, 00:19:53.700 "reset": true, 00:19:53.700 "nvme_admin": true, 00:19:53.700 "nvme_io": true, 00:19:53.700 "nvme_io_md": false, 00:19:53.700 "write_zeroes": true, 00:19:53.700 "zcopy": false, 00:19:53.700 "get_zone_info": false, 00:19:53.700 "zone_management": false, 00:19:53.700 "zone_append": false, 00:19:53.700 "compare": true, 00:19:53.700 "compare_and_write": true, 00:19:53.700 "abort": true, 00:19:53.700 "seek_hole": false, 00:19:53.700 "seek_data": false, 00:19:53.700 "copy": true, 00:19:53.700 "nvme_iov_md": 
false 00:19:53.700 }, 00:19:53.700 "memory_domains": [ 00:19:53.700 { 00:19:53.700 "dma_device_id": "system", 00:19:53.700 "dma_device_type": 1 00:19:53.700 } 00:19:53.700 ], 00:19:53.700 "driver_specific": { 00:19:53.700 "nvme": [ 00:19:53.700 { 00:19:53.700 "trid": { 00:19:53.700 "trtype": "TCP", 00:19:53.700 "adrfam": "IPv4", 00:19:53.700 "traddr": "10.0.0.2", 00:19:53.700 "trsvcid": "4420", 00:19:53.700 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:53.700 }, 00:19:53.700 "ctrlr_data": { 00:19:53.700 "cntlid": 1, 00:19:53.700 "vendor_id": "0x8086", 00:19:53.700 "model_number": "SPDK bdev Controller", 00:19:53.700 "serial_number": "00000000000000000000", 00:19:53.700 "firmware_revision": "24.09", 00:19:53.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:53.700 "oacs": { 00:19:53.700 "security": 0, 00:19:53.700 "format": 0, 00:19:53.700 "firmware": 0, 00:19:53.700 "ns_manage": 0 00:19:53.700 }, 00:19:53.700 "multi_ctrlr": true, 00:19:53.700 "ana_reporting": false 00:19:53.700 }, 00:19:53.700 "vs": { 00:19:53.700 "nvme_version": "1.3" 00:19:53.700 }, 00:19:53.700 "ns_data": { 00:19:53.700 "id": 1, 00:19:53.700 "can_share": true 00:19:53.700 } 00:19:53.700 } 00:19:53.700 ], 00:19:53.700 "mp_policy": "active_passive" 00:19:53.700 } 00:19:53.700 } 00:19:53.700 ] 00:19:53.700 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.700 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:53.700 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.700 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.700 [2024-07-24 19:16:59.536763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:53.700 [2024-07-24 19:16:59.536863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2e3d0 (9): Bad file descriptor 00:19:53.700 [2024-07-24 19:16:59.709675] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
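The reset exercise above boils down to four RPCs: attach a bdev_nvme controller over the 4420 listener, read back the controller ID, reset, and read it again; the bdev_get_bdevs dump that follows the "Resetting controller successful" notice shows cntlid moving from 1 to 2 as the host re-establishes the association. Condensed (the rpc function is shorthand for scripts/rpc.py against the target's socket, not the suite's rpc_cmd helper):

rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0
rpc bdev_get_bdevs -b nvme0n1 | grep '"cntlid"'   # 1 on the first association
rpc bdev_nvme_reset_controller nvme0
rpc bdev_get_bdevs -b nvme0n1 | grep '"cntlid"'   # 2 once the reset completes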
00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.960 [ 00:19:53.960 { 00:19:53.960 "name": "nvme0n1", 00:19:53.960 "aliases": [ 00:19:53.960 "3984d932-fe2f-4480-b96b-c276c7fea34f" 00:19:53.960 ], 00:19:53.960 "product_name": "NVMe disk", 00:19:53.960 "block_size": 512, 00:19:53.960 "num_blocks": 2097152, 00:19:53.960 "uuid": "3984d932-fe2f-4480-b96b-c276c7fea34f", 00:19:53.960 "assigned_rate_limits": { 00:19:53.960 "rw_ios_per_sec": 0, 00:19:53.960 "rw_mbytes_per_sec": 0, 00:19:53.960 "r_mbytes_per_sec": 0, 00:19:53.960 "w_mbytes_per_sec": 0 00:19:53.960 }, 00:19:53.960 "claimed": false, 00:19:53.960 "zoned": false, 00:19:53.960 "supported_io_types": { 00:19:53.960 "read": true, 00:19:53.960 "write": true, 00:19:53.960 "unmap": false, 00:19:53.960 "flush": true, 00:19:53.960 "reset": true, 00:19:53.960 "nvme_admin": true, 00:19:53.960 "nvme_io": true, 00:19:53.960 "nvme_io_md": false, 00:19:53.960 "write_zeroes": true, 00:19:53.960 "zcopy": false, 00:19:53.960 "get_zone_info": false, 00:19:53.960 "zone_management": false, 00:19:53.960 "zone_append": false, 00:19:53.960 "compare": true, 00:19:53.960 "compare_and_write": true, 00:19:53.960 "abort": true, 00:19:53.960 "seek_hole": false, 00:19:53.960 "seek_data": false, 00:19:53.960 "copy": true, 00:19:53.960 "nvme_iov_md": false 00:19:53.960 }, 00:19:53.960 "memory_domains": [ 00:19:53.960 { 00:19:53.960 "dma_device_id": "system", 00:19:53.960 "dma_device_type": 1 00:19:53.960 } 00:19:53.960 ], 00:19:53.960 "driver_specific": { 00:19:53.960 "nvme": [ 00:19:53.960 { 00:19:53.960 "trid": { 00:19:53.960 "trtype": "TCP", 00:19:53.960 "adrfam": "IPv4", 00:19:53.960 "traddr": "10.0.0.2", 00:19:53.960 "trsvcid": "4420", 00:19:53.960 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:53.960 }, 00:19:53.960 "ctrlr_data": { 00:19:53.960 "cntlid": 2, 00:19:53.960 "vendor_id": "0x8086", 00:19:53.960 "model_number": "SPDK bdev Controller", 00:19:53.960 "serial_number": "00000000000000000000", 00:19:53.960 "firmware_revision": "24.09", 00:19:53.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:53.960 "oacs": { 00:19:53.960 "security": 0, 00:19:53.960 "format": 0, 00:19:53.960 "firmware": 0, 00:19:53.960 "ns_manage": 0 00:19:53.960 }, 00:19:53.960 "multi_ctrlr": true, 00:19:53.960 "ana_reporting": false 00:19:53.960 }, 00:19:53.960 "vs": { 00:19:53.960 "nvme_version": "1.3" 00:19:53.960 }, 00:19:53.960 "ns_data": { 00:19:53.960 "id": 1, 00:19:53.960 "can_share": true 00:19:53.960 } 00:19:53.960 } 00:19:53.960 ], 00:19:53.960 "mp_policy": "active_passive" 00:19:53.960 } 00:19:53.960 } 00:19:53.960 ] 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.960 19:16:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Uh6rtJ2v0p 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Uh6rtJ2v0p 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.960 [2024-07-24 19:16:59.769646] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.960 [2024-07-24 19:16:59.769808] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Uh6rtJ2v0p 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.960 [2024-07-24 19:16:59.777647] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Uh6rtJ2v0p 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.960 [2024-07-24 19:16:59.785673] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:53.960 [2024-07-24 19:16:59.785740] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:53.960 nvme0n1 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
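The final leg repeats the attach over a TLS-enabled listener on port 4421: a PSK in the NVMeTLSkey-1 interchange format is written to a mode-0600 temp file, any-host access is disabled, and both the listener (--secure-channel) and the host entry (--psk) reference it; the bdev_get_bdevs listing that follows shows the new association on trsvcid 4421 with cntlid 3. Note the warnings in the trace: in this release the PSK-path option and spdk_nvme_ctrlr_opts.psk were experimental and slated for removal in v24.09. Condensed, with the same rpc shorthand as in the previous sketch:

key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"                       # keep the PSK private, as the test does
rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host1 --psk "$key_path"
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
rm -f "$key_path"    # in the trace this happens after bdev_nvme_detach_controller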
00:19:53.960 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.960 [ 00:19:53.960 { 00:19:53.960 "name": "nvme0n1", 00:19:53.960 "aliases": [ 00:19:53.960 "3984d932-fe2f-4480-b96b-c276c7fea34f" 00:19:53.960 ], 00:19:53.960 "product_name": "NVMe disk", 00:19:53.960 "block_size": 512, 00:19:53.960 "num_blocks": 2097152, 00:19:53.960 "uuid": "3984d932-fe2f-4480-b96b-c276c7fea34f", 00:19:53.960 "assigned_rate_limits": { 00:19:53.960 "rw_ios_per_sec": 0, 00:19:53.960 "rw_mbytes_per_sec": 0, 00:19:53.960 "r_mbytes_per_sec": 0, 00:19:53.960 "w_mbytes_per_sec": 0 00:19:53.960 }, 00:19:53.960 "claimed": false, 00:19:53.960 "zoned": false, 00:19:53.960 "supported_io_types": { 00:19:53.960 "read": true, 00:19:53.960 "write": true, 00:19:53.960 "unmap": false, 00:19:53.960 "flush": true, 00:19:53.960 "reset": true, 00:19:53.960 "nvme_admin": true, 00:19:53.960 "nvme_io": true, 00:19:53.960 "nvme_io_md": false, 00:19:53.960 "write_zeroes": true, 00:19:53.960 "zcopy": false, 00:19:53.960 "get_zone_info": false, 00:19:53.960 "zone_management": false, 00:19:53.960 "zone_append": false, 00:19:53.960 "compare": true, 00:19:53.960 "compare_and_write": true, 00:19:53.960 "abort": true, 00:19:53.960 "seek_hole": false, 00:19:53.960 "seek_data": false, 00:19:53.960 "copy": true, 00:19:53.960 "nvme_iov_md": false 00:19:53.960 }, 00:19:53.960 "memory_domains": [ 00:19:53.960 { 00:19:53.960 "dma_device_id": "system", 00:19:53.960 "dma_device_type": 1 00:19:53.960 } 00:19:53.960 ], 00:19:53.960 "driver_specific": { 00:19:53.960 "nvme": [ 00:19:53.960 { 00:19:53.960 "trid": { 00:19:53.960 "trtype": "TCP", 00:19:53.960 "adrfam": "IPv4", 00:19:53.960 "traddr": "10.0.0.2", 00:19:53.960 "trsvcid": "4421", 00:19:53.960 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:53.960 }, 00:19:53.960 "ctrlr_data": { 00:19:53.960 "cntlid": 3, 00:19:53.961 "vendor_id": "0x8086", 00:19:53.961 "model_number": "SPDK bdev Controller", 00:19:53.961 "serial_number": "00000000000000000000", 00:19:53.961 "firmware_revision": "24.09", 00:19:53.961 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:53.961 "oacs": { 00:19:53.961 "security": 0, 00:19:53.961 "format": 0, 00:19:53.961 "firmware": 0, 00:19:53.961 "ns_manage": 0 00:19:53.961 }, 00:19:53.961 "multi_ctrlr": true, 00:19:53.961 "ana_reporting": false 00:19:53.961 }, 00:19:53.961 "vs": { 00:19:53.961 "nvme_version": "1.3" 00:19:53.961 }, 00:19:53.961 "ns_data": { 00:19:53.961 "id": 1, 00:19:53.961 "can_share": true 00:19:53.961 } 00:19:53.961 } 00:19:53.961 ], 00:19:53.961 "mp_policy": "active_passive" 00:19:53.961 } 00:19:53.961 } 00:19:53.961 ] 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Uh6rtJ2v0p 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:19:53.961 19:16:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.961 rmmod nvme_tcp 00:19:53.961 rmmod nvme_fabrics 00:19:53.961 rmmod nvme_keyring 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2601949 ']' 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2601949 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2601949 ']' 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2601949 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:53.961 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2601949 00:19:54.221 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:54.221 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:54.221 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2601949' 00:19:54.221 killing process with pid 2601949 00:19:54.221 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2601949 00:19:54.221 [2024-07-24 19:16:59.984287] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:54.221 [2024-07-24 19:16:59.984323] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:54.221 19:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2601949 00:19:54.221 19:17:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:54.221 19:17:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:54.221 19:17:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:54.221 19:17:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.221 19:17:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:54.221 19:17:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.221 19:17:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.221 19:17:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.760 19:17:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.760 00:19:56.760 real 0m5.170s 00:19:56.760 user 0m2.066s 00:19:56.760 sys 0m1.533s 00:19:56.760 19:17:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:56.760 19:17:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.760 ************************************ 00:19:56.760 END TEST nvmf_async_init 00:19:56.760 ************************************ 00:19:56.760 19:17:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:56.760 19:17:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:56.760 19:17:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:56.760 19:17:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.760 ************************************ 00:19:56.760 START TEST dma 00:19:56.760 ************************************ 00:19:56.760 19:17:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:56.760 * Looking for test storage... 00:19:56.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.761 
19:17:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.761 19:17:02 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:19:56.761 00:19:56.761 real 0m0.066s 00:19:56.761 user 0m0.032s 00:19:56.761 sys 0m0.039s 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:56.761 ************************************ 00:19:56.761 END TEST dma 00:19:56.761 ************************************ 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.761 ************************************ 00:19:56.761 START TEST nvmf_identify 00:19:56.761 ************************************ 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:56.761 * Looking for test storage... 00:19:56.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.761 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.762 19:17:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:58.142 19:17:04 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:58.142 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.142 19:17:04 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:58.142 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:58.142 Found net devices under 0000:08:00.0: cvl_0_0 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:58.142 Found net devices under 0000:08:00.1: cvl_0_1 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:58.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:19:58.142 00:19:58.142 --- 10.0.0.2 ping statistics --- 00:19:58.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.142 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:58.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:19:58.142 00:19:58.142 --- 10.0.0.1 ping statistics --- 00:19:58.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.142 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:58.142 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.143 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:58.143 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:58.143 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.143 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:58.143 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:58.402 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:58.402 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:58.402 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.402 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2603607 00:19:58.402 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:58.402 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:58.402 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2603607 00:19:58.402 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2603607 ']' 00:19:58.402 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.402 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.402 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.402 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.402 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.402 [2024-07-24 19:17:04.238690] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:19:58.402 [2024-07-24 19:17:04.238788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.402 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.402 [2024-07-24 19:17:04.304474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:58.660 [2024-07-24 19:17:04.423074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.660 [2024-07-24 19:17:04.423124] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.660 [2024-07-24 19:17:04.423140] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.660 [2024-07-24 19:17:04.423153] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.660 [2024-07-24 19:17:04.423164] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.660 [2024-07-24 19:17:04.423273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.660 [2024-07-24 19:17:04.423319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.660 [2024-07-24 19:17:04.423383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:58.661 [2024-07-24 19:17:04.423388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.661 [2024-07-24 19:17:04.542735] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.661 Malloc0 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.661 [2024-07-24 19:17:04.621221] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:58.661 [ 00:19:58.661 { 00:19:58.661 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:58.661 "subtype": "Discovery", 00:19:58.661 "listen_addresses": [ 00:19:58.661 { 00:19:58.661 "trtype": "TCP", 00:19:58.661 "adrfam": "IPv4", 00:19:58.661 "traddr": "10.0.0.2", 00:19:58.661 "trsvcid": "4420" 00:19:58.661 } 00:19:58.661 ], 00:19:58.661 "allow_any_host": true, 00:19:58.661 "hosts": [] 00:19:58.661 }, 00:19:58.661 { 00:19:58.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.661 "subtype": "NVMe", 00:19:58.661 "listen_addresses": [ 00:19:58.661 { 00:19:58.661 "trtype": "TCP", 00:19:58.661 "adrfam": "IPv4", 00:19:58.661 "traddr": "10.0.0.2", 00:19:58.661 "trsvcid": "4420" 00:19:58.661 } 00:19:58.661 ], 00:19:58.661 "allow_any_host": true, 00:19:58.661 "hosts": [], 00:19:58.661 "serial_number": "SPDK00000000000001", 00:19:58.661 "model_number": "SPDK bdev Controller", 00:19:58.661 "max_namespaces": 32, 00:19:58.661 "min_cntlid": 1, 00:19:58.661 "max_cntlid": 65519, 00:19:58.661 "namespaces": [ 00:19:58.661 { 00:19:58.661 "nsid": 1, 00:19:58.661 "bdev_name": "Malloc0", 00:19:58.661 "name": "Malloc0", 00:19:58.661 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:58.661 "eui64": "ABCDEF0123456789", 00:19:58.661 "uuid": "bd1f651f-a790-4cf7-9a72-c764bc18874b" 00:19:58.661 } 00:19:58.661 ] 00:19:58.661 } 00:19:58.661 ] 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.661 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:58.661 [2024-07-24 19:17:04.665564] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:19:58.661 [2024-07-24 19:17:04.665613] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603639 ] 00:19:58.924 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.924 [2024-07-24 19:17:04.708541] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:58.924 [2024-07-24 19:17:04.708615] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:58.924 [2024-07-24 19:17:04.708627] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:58.924 [2024-07-24 19:17:04.708645] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:58.924 [2024-07-24 19:17:04.708661] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:58.924 [2024-07-24 19:17:04.708901] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:58.924 [2024-07-24 19:17:04.708966] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbf7400 0 00:19:58.924 [2024-07-24 19:17:04.715492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:58.924 [2024-07-24 19:17:04.715527] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:58.924 [2024-07-24 19:17:04.715540] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:58.924 [2024-07-24 19:17:04.715547] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:58.924 [2024-07-24 19:17:04.715607] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.924 [2024-07-24 19:17:04.715622] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.924 [2024-07-24 19:17:04.715630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7400) 00:19:58.924 [2024-07-24 19:17:04.715659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:58.924 [2024-07-24 19:17:04.715689] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc573c0, cid 0, qid 0 00:19:58.924 [2024-07-24 19:17:04.723510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.924 [2024-07-24 19:17:04.723530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.924 [2024-07-24 19:17:04.723538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.924 [2024-07-24 19:17:04.723547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc573c0) on tqpair=0xbf7400 00:19:58.924 [2024-07-24 19:17:04.723571] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:58.924 [2024-07-24 19:17:04.723584] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:58.924 [2024-07-24 19:17:04.723596] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting 
state to read vs wait for vs (no timeout) 00:19:58.924 [2024-07-24 19:17:04.723622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.924 [2024-07-24 19:17:04.723632] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.924 [2024-07-24 19:17:04.723640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7400) 00:19:58.924 [2024-07-24 19:17:04.723652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.924 [2024-07-24 19:17:04.723677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc573c0, cid 0, qid 0 00:19:58.924 [2024-07-24 19:17:04.723807] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.924 [2024-07-24 19:17:04.723823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.924 [2024-07-24 19:17:04.723830] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.924 [2024-07-24 19:17:04.723838] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc573c0) on tqpair=0xbf7400 00:19:58.924 [2024-07-24 19:17:04.723852] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:58.924 [2024-07-24 19:17:04.723867] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:58.925 [2024-07-24 19:17:04.723881] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.723889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.723897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7400) 00:19:58.925 [2024-07-24 19:17:04.723909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.925 [2024-07-24 19:17:04.723931] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc573c0, cid 0, qid 0 00:19:58.925 [2024-07-24 19:17:04.724038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.925 [2024-07-24 19:17:04.724054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.925 [2024-07-24 19:17:04.724061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.724069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc573c0) on tqpair=0xbf7400 00:19:58.925 [2024-07-24 19:17:04.724079] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:58.925 [2024-07-24 19:17:04.724100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:58.925 [2024-07-24 19:17:04.724115] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.724123] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.724130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7400) 00:19:58.925 [2024-07-24 19:17:04.724142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.925 [2024-07-24 19:17:04.724165] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc573c0, cid 0, qid 0 00:19:58.925 [2024-07-24 19:17:04.724272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.925 [2024-07-24 19:17:04.724288] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.925 [2024-07-24 19:17:04.724295] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.724303] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc573c0) on tqpair=0xbf7400 00:19:58.925 [2024-07-24 19:17:04.724313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:58.925 [2024-07-24 19:17:04.724332] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.724341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.724349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7400) 00:19:58.925 [2024-07-24 19:17:04.724360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.925 [2024-07-24 19:17:04.724383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc573c0, cid 0, qid 0 00:19:58.925 [2024-07-24 19:17:04.724499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.925 [2024-07-24 19:17:04.724515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.925 [2024-07-24 19:17:04.724522] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.724530] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc573c0) on tqpair=0xbf7400 00:19:58.925 [2024-07-24 19:17:04.724540] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:58.925 [2024-07-24 19:17:04.724550] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:58.925 [2024-07-24 19:17:04.724564] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:58.925 [2024-07-24 19:17:04.724676] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:58.925 [2024-07-24 19:17:04.724686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:58.925 [2024-07-24 19:17:04.724703] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.724711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.724718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7400) 00:19:58.925 [2024-07-24 19:17:04.724730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.925 [2024-07-24 19:17:04.724753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc573c0, cid 0, qid 0 00:19:58.925 [2024-07-24 19:17:04.724867] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.925 
[2024-07-24 19:17:04.724882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.925 [2024-07-24 19:17:04.724893] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.724902] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc573c0) on tqpair=0xbf7400 00:19:58.925 [2024-07-24 19:17:04.724911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:58.925 [2024-07-24 19:17:04.724929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.724938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.724951] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7400) 00:19:58.925 [2024-07-24 19:17:04.724963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.925 [2024-07-24 19:17:04.724985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc573c0, cid 0, qid 0 00:19:58.925 [2024-07-24 19:17:04.725094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.925 [2024-07-24 19:17:04.725110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.925 [2024-07-24 19:17:04.725117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.725125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc573c0) on tqpair=0xbf7400 00:19:58.925 [2024-07-24 19:17:04.725134] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:58.925 [2024-07-24 19:17:04.725144] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:58.925 [2024-07-24 19:17:04.725158] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:58.925 [2024-07-24 19:17:04.725180] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:58.925 [2024-07-24 19:17:04.725200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.725209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7400) 00:19:58.925 [2024-07-24 19:17:04.725221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.925 [2024-07-24 19:17:04.725243] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc573c0, cid 0, qid 0 00:19:58.925 [2024-07-24 19:17:04.725413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.925 [2024-07-24 19:17:04.725434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.925 [2024-07-24 19:17:04.725443] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.725451] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf7400): datao=0, datal=4096, cccid=0 00:19:58.925 [2024-07-24 19:17:04.725460] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0xc573c0) on tqpair(0xbf7400): expected_datao=0, payload_size=4096 00:19:58.925 [2024-07-24 19:17:04.725469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.725492] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.725504] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.725519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.925 [2024-07-24 19:17:04.725530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.925 [2024-07-24 19:17:04.725537] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.925 [2024-07-24 19:17:04.725545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc573c0) on tqpair=0xbf7400 00:19:58.925 [2024-07-24 19:17:04.725558] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:58.925 [2024-07-24 19:17:04.725568] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:58.925 [2024-07-24 19:17:04.725581] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:58.926 [2024-07-24 19:17:04.725593] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:58.926 [2024-07-24 19:17:04.725603] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:58.926 [2024-07-24 19:17:04.725612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:58.926 [2024-07-24 19:17:04.725628] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:58.926 [2024-07-24 19:17:04.725647] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.725656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.725664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7400) 00:19:58.926 [2024-07-24 19:17:04.725677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.926 [2024-07-24 19:17:04.725700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc573c0, cid 0, qid 0 00:19:58.926 [2024-07-24 19:17:04.725820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.926 [2024-07-24 19:17:04.725836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.926 [2024-07-24 19:17:04.725844] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.725851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc573c0) on tqpair=0xbf7400 00:19:58.926 [2024-07-24 19:17:04.725871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.725879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.725886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf7400) 00:19:58.926 [2024-07-24 19:17:04.725897] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.926 [2024-07-24 19:17:04.725908] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.725916] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.725923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbf7400) 00:19:58.926 [2024-07-24 19:17:04.725933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.926 [2024-07-24 19:17:04.725944] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.725951] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.725958] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbf7400) 00:19:58.926 [2024-07-24 19:17:04.725968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.926 [2024-07-24 19:17:04.725979] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.725987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.725993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7400) 00:19:58.926 [2024-07-24 19:17:04.726003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.926 [2024-07-24 19:17:04.726013] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:58.926 [2024-07-24 19:17:04.726034] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:58.926 [2024-07-24 19:17:04.726051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.726060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf7400) 00:19:58.926 [2024-07-24 19:17:04.726072] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.926 [2024-07-24 19:17:04.726096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc573c0, cid 0, qid 0 00:19:58.926 [2024-07-24 19:17:04.726114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc57540, cid 1, qid 0 00:19:58.926 [2024-07-24 19:17:04.726127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc576c0, cid 2, qid 0 00:19:58.926 [2024-07-24 19:17:04.726136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc57840, cid 3, qid 0 00:19:58.926 [2024-07-24 19:17:04.726145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc579c0, cid 4, qid 0 00:19:58.926 [2024-07-24 19:17:04.726285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.926 [2024-07-24 19:17:04.726300] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.926 [2024-07-24 19:17:04.726308] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.726315] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc579c0) on tqpair=0xbf7400 00:19:58.926 [2024-07-24 19:17:04.726326] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:58.926 [2024-07-24 19:17:04.726336] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:58.926 [2024-07-24 19:17:04.726355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.726365] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf7400) 00:19:58.926 [2024-07-24 19:17:04.726377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.926 [2024-07-24 19:17:04.726399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc579c0, cid 4, qid 0 00:19:58.926 [2024-07-24 19:17:04.730497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.926 [2024-07-24 19:17:04.730516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.926 [2024-07-24 19:17:04.730524] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.730531] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf7400): datao=0, datal=4096, cccid=4 00:19:58.926 [2024-07-24 19:17:04.730540] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc579c0) on tqpair(0xbf7400): expected_datao=0, payload_size=4096 00:19:58.926 [2024-07-24 19:17:04.730549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.730560] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.730568] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.730578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.926 [2024-07-24 19:17:04.730589] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.926 [2024-07-24 19:17:04.730596] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.730603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc579c0) on tqpair=0xbf7400 00:19:58.926 [2024-07-24 19:17:04.730634] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:58.926 [2024-07-24 19:17:04.730678] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.730689] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf7400) 00:19:58.926 [2024-07-24 19:17:04.730702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.926 [2024-07-24 19:17:04.730718] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.730727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.730734] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbf7400) 00:19:58.926 [2024-07-24 19:17:04.730745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:19:58.926 [2024-07-24 19:17:04.730775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc579c0, cid 4, qid 0 00:19:58.926 [2024-07-24 19:17:04.730794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc57b40, cid 5, qid 0 00:19:58.926 [2024-07-24 19:17:04.730947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.926 [2024-07-24 19:17:04.730963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.926 [2024-07-24 19:17:04.730970] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.730977] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf7400): datao=0, datal=1024, cccid=4 00:19:58.926 [2024-07-24 19:17:04.730986] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc579c0) on tqpair(0xbf7400): expected_datao=0, payload_size=1024 00:19:58.926 [2024-07-24 19:17:04.730995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.731006] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.731013] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.731023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.926 [2024-07-24 19:17:04.731033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.926 [2024-07-24 19:17:04.731041] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.926 [2024-07-24 19:17:04.731048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc57b40) on tqpair=0xbf7400 00:19:58.926 [2024-07-24 19:17:04.771591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.926 [2024-07-24 19:17:04.771615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.926 [2024-07-24 19:17:04.771624] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.927 [2024-07-24 19:17:04.771632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc579c0) on tqpair=0xbf7400 00:19:58.927 [2024-07-24 19:17:04.771653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.927 [2024-07-24 19:17:04.771663] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf7400) 00:19:58.927 [2024-07-24 19:17:04.771676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.927 [2024-07-24 19:17:04.771708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc579c0, cid 4, qid 0 00:19:58.927 [2024-07-24 19:17:04.771857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.927 [2024-07-24 19:17:04.771878] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.927 [2024-07-24 19:17:04.771887] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.927 [2024-07-24 19:17:04.771894] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf7400): datao=0, datal=3072, cccid=4 00:19:58.927 [2024-07-24 19:17:04.771902] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc579c0) on tqpair(0xbf7400): expected_datao=0, payload_size=3072 00:19:58.927 [2024-07-24 19:17:04.771911] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.927 [2024-07-24 19:17:04.771923] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.927 [2024-07-24 19:17:04.771931] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.927 [2024-07-24 19:17:04.771944] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.927 [2024-07-24 19:17:04.771955] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.927 [2024-07-24 19:17:04.771962] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.927 [2024-07-24 19:17:04.771970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc579c0) on tqpair=0xbf7400 00:19:58.927 [2024-07-24 19:17:04.771991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.927 [2024-07-24 19:17:04.772001] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf7400) 00:19:58.927 [2024-07-24 19:17:04.772013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.927 [2024-07-24 19:17:04.772044] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc579c0, cid 4, qid 0 00:19:58.927 [2024-07-24 19:17:04.772185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.927 [2024-07-24 19:17:04.772200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.927 [2024-07-24 19:17:04.772207] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.927 [2024-07-24 19:17:04.772215] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf7400): datao=0, datal=8, cccid=4 00:19:58.927 [2024-07-24 19:17:04.772223] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc579c0) on tqpair(0xbf7400): expected_datao=0, payload_size=8 00:19:58.927 [2024-07-24 19:17:04.772232] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.927 [2024-07-24 19:17:04.772242] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.927 [2024-07-24 19:17:04.772250] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.927 [2024-07-24 19:17:04.812598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.927 [2024-07-24 19:17:04.812623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.927 [2024-07-24 19:17:04.812632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.927 [2024-07-24 19:17:04.812640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc579c0) on tqpair=0xbf7400 00:19:58.927 ===================================================== 00:19:58.927 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:58.927 ===================================================== 00:19:58.927 Controller Capabilities/Features 00:19:58.927 ================================ 00:19:58.927 Vendor ID: 0000 00:19:58.927 Subsystem Vendor ID: 0000 00:19:58.927 Serial Number: .................... 00:19:58.927 Model Number: ........................................ 
00:19:58.927 Firmware Version: 24.09 00:19:58.927 Recommended Arb Burst: 0 00:19:58.927 IEEE OUI Identifier: 00 00 00 00:19:58.927 Multi-path I/O 00:19:58.927 May have multiple subsystem ports: No 00:19:58.927 May have multiple controllers: No 00:19:58.927 Associated with SR-IOV VF: No 00:19:58.927 Max Data Transfer Size: 131072 00:19:58.927 Max Number of Namespaces: 0 00:19:58.927 Max Number of I/O Queues: 1024 00:19:58.927 NVMe Specification Version (VS): 1.3 00:19:58.927 NVMe Specification Version (Identify): 1.3 00:19:58.927 Maximum Queue Entries: 128 00:19:58.927 Contiguous Queues Required: Yes 00:19:58.927 Arbitration Mechanisms Supported 00:19:58.927 Weighted Round Robin: Not Supported 00:19:58.927 Vendor Specific: Not Supported 00:19:58.927 Reset Timeout: 15000 ms 00:19:58.927 Doorbell Stride: 4 bytes 00:19:58.927 NVM Subsystem Reset: Not Supported 00:19:58.927 Command Sets Supported 00:19:58.927 NVM Command Set: Supported 00:19:58.927 Boot Partition: Not Supported 00:19:58.927 Memory Page Size Minimum: 4096 bytes 00:19:58.927 Memory Page Size Maximum: 4096 bytes 00:19:58.927 Persistent Memory Region: Not Supported 00:19:58.927 Optional Asynchronous Events Supported 00:19:58.927 Namespace Attribute Notices: Not Supported 00:19:58.927 Firmware Activation Notices: Not Supported 00:19:58.927 ANA Change Notices: Not Supported 00:19:58.927 PLE Aggregate Log Change Notices: Not Supported 00:19:58.927 LBA Status Info Alert Notices: Not Supported 00:19:58.927 EGE Aggregate Log Change Notices: Not Supported 00:19:58.927 Normal NVM Subsystem Shutdown event: Not Supported 00:19:58.927 Zone Descriptor Change Notices: Not Supported 00:19:58.927 Discovery Log Change Notices: Supported 00:19:58.927 Controller Attributes 00:19:58.927 128-bit Host Identifier: Not Supported 00:19:58.927 Non-Operational Permissive Mode: Not Supported 00:19:58.927 NVM Sets: Not Supported 00:19:58.927 Read Recovery Levels: Not Supported 00:19:58.927 Endurance Groups: Not Supported 00:19:58.927 Predictable Latency Mode: Not Supported 00:19:58.927 Traffic Based Keep Alive: Not Supported 00:19:58.927 Namespace Granularity: Not Supported 00:19:58.927 SQ Associations: Not Supported 00:19:58.927 UUID List: Not Supported 00:19:58.927 Multi-Domain Subsystem: Not Supported 00:19:58.927 Fixed Capacity Management: Not Supported 00:19:58.927 Variable Capacity Management: Not Supported 00:19:58.927 Delete Endurance Group: Not Supported 00:19:58.927 Delete NVM Set: Not Supported 00:19:58.927 Extended LBA Formats Supported: Not Supported 00:19:58.927 Flexible Data Placement Supported: Not Supported 00:19:58.927 00:19:58.927 Controller Memory Buffer Support 00:19:58.927 ================================ 00:19:58.927 Supported: No 00:19:58.927 00:19:58.927 Persistent Memory Region Support 00:19:58.927 ================================ 00:19:58.927 Supported: No 00:19:58.927 00:19:58.927 Admin Command Set Attributes 00:19:58.927 ============================ 00:19:58.927 Security Send/Receive: Not Supported 00:19:58.927 Format NVM: Not Supported 00:19:58.927 Firmware Activate/Download: Not Supported 00:19:58.927 Namespace Management: Not Supported 00:19:58.927 Device Self-Test: Not Supported 00:19:58.927 Directives: Not Supported 00:19:58.927 NVMe-MI: Not Supported 00:19:58.927 Virtualization Management: Not Supported 00:19:58.927 Doorbell Buffer Config: Not Supported 00:19:58.927 Get LBA Status Capability: Not Supported 00:19:58.927 Command & Feature Lockdown Capability: Not Supported 00:19:58.927 Abort Command Limit: 1 00:19:58.927 Async
Event Request Limit: 4 00:19:58.927 Number of Firmware Slots: N/A 00:19:58.927 Firmware Slot 1 Read-Only: N/A 00:19:58.927 Firmware Activation Without Reset: N/A 00:19:58.927 Multiple Update Detection Support: N/A 00:19:58.927 Firmware Update Granularity: No Information Provided 00:19:58.927 Per-Namespace SMART Log: No 00:19:58.927 Asymmetric Namespace Access Log Page: Not Supported 00:19:58.927 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:58.927 Command Effects Log Page: Not Supported 00:19:58.927 Get Log Page Extended Data: Supported 00:19:58.927 Telemetry Log Pages: Not Supported 00:19:58.928 Persistent Event Log Pages: Not Supported 00:19:58.928 Supported Log Pages Log Page: May Support 00:19:58.928 Commands Supported & Effects Log Page: Not Supported 00:19:58.928 Feature Identifiers & Effects Log Page: May Support 00:19:58.928 NVMe-MI Commands & Effects Log Page: May Support 00:19:58.928 Data Area 4 for Telemetry Log: Not Supported 00:19:58.928 Error Log Page Entries Supported: 128 00:19:58.928 Keep Alive: Not Supported 00:19:58.928 00:19:58.928 NVM Command Set Attributes 00:19:58.928 ========================== 00:19:58.928 Submission Queue Entry Size 00:19:58.928 Max: 1 00:19:58.928 Min: 1 00:19:58.928 Completion Queue Entry Size 00:19:58.928 Max: 1 00:19:58.928 Min: 1 00:19:58.928 Number of Namespaces: 0 00:19:58.928 Compare Command: Not Supported 00:19:58.928 Write Uncorrectable Command: Not Supported 00:19:58.928 Dataset Management Command: Not Supported 00:19:58.928 Write Zeroes Command: Not Supported 00:19:58.928 Set Features Save Field: Not Supported 00:19:58.928 Reservations: Not Supported 00:19:58.928 Timestamp: Not Supported 00:19:58.928 Copy: Not Supported 00:19:58.928 Volatile Write Cache: Not Present 00:19:58.928 Atomic Write Unit (Normal): 1 00:19:58.928 Atomic Write Unit (PFail): 1 00:19:58.928 Atomic Compare & Write Unit: 1 00:19:58.928 Fused Compare & Write: Supported 00:19:58.928 Scatter-Gather List 00:19:58.928 SGL Command Set: Supported 00:19:58.928 SGL Keyed: Supported 00:19:58.928 SGL Bit Bucket Descriptor: Not Supported 00:19:58.928 SGL Metadata Pointer: Not Supported 00:19:58.928 Oversized SGL: Not Supported 00:19:58.928 SGL Metadata Address: Not Supported 00:19:58.928 SGL Offset: Supported 00:19:58.928 Transport SGL Data Block: Not Supported 00:19:58.928 Replay Protected Memory Block: Not Supported 00:19:58.928 00:19:58.928 Firmware Slot Information 00:19:58.928 ========================= 00:19:58.928 Active slot: 0 00:19:58.928 00:19:58.928 00:19:58.928 Error Log 00:19:58.928 ========= 00:19:58.928 00:19:58.928 Active Namespaces 00:19:58.928 ================= 00:19:58.928 Discovery Log Page 00:19:58.928 ================== 00:19:58.928 Generation Counter: 2 00:19:58.928 Number of Records: 2 00:19:58.928 Record Format: 0 00:19:58.928 00:19:58.928 Discovery Log Entry 0 00:19:58.928 ---------------------- 00:19:58.928 Transport Type: 3 (TCP) 00:19:58.928 Address Family: 1 (IPv4) 00:19:58.928 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:58.928 Entry Flags: 00:19:58.928 Duplicate Returned Information: 1 00:19:58.928 Explicit Persistent Connection Support for Discovery: 1 00:19:58.928 Transport Requirements: 00:19:58.928 Secure Channel: Not Required 00:19:58.928 Port ID: 0 (0x0000) 00:19:58.928 Controller ID: 65535 (0xffff) 00:19:58.928 Admin Max SQ Size: 128 00:19:58.928 Transport Service Identifier: 4420 00:19:58.928 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:58.928 Transport Address: 10.0.0.2 00:19:58.928 
Discovery Log Entry 1 00:19:58.928 ---------------------- 00:19:58.928 Transport Type: 3 (TCP) 00:19:58.928 Address Family: 1 (IPv4) 00:19:58.928 Subsystem Type: 2 (NVM Subsystem) 00:19:58.928 Entry Flags: 00:19:58.928 Duplicate Returned Information: 0 00:19:58.928 Explicit Persistent Connection Support for Discovery: 0 00:19:58.928 Transport Requirements: 00:19:58.928 Secure Channel: Not Required 00:19:58.928 Port ID: 0 (0x0000) 00:19:58.928 Controller ID: 65535 (0xffff) 00:19:58.928 Admin Max SQ Size: 128 00:19:58.928 Transport Service Identifier: 4420 00:19:58.928 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:58.928 Transport Address: 10.0.0.2 [2024-07-24 19:17:04.812780] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:58.928 [2024-07-24 19:17:04.812804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc573c0) on tqpair=0xbf7400 00:19:58.928 [2024-07-24 19:17:04.812818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.928 [2024-07-24 19:17:04.812828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc57540) on tqpair=0xbf7400 00:19:58.928 [2024-07-24 19:17:04.812837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.928 [2024-07-24 19:17:04.812846] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc576c0) on tqpair=0xbf7400 00:19:58.928 [2024-07-24 19:17:04.812855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.928 [2024-07-24 19:17:04.812864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc57840) on tqpair=0xbf7400 00:19:58.928 [2024-07-24 19:17:04.812872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.928 [2024-07-24 19:17:04.812893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.928 [2024-07-24 19:17:04.812903] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.928 [2024-07-24 19:17:04.812910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7400) 00:19:58.928 [2024-07-24 19:17:04.812922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.928 [2024-07-24 19:17:04.812951] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc57840, cid 3, qid 0 00:19:58.928 [2024-07-24 19:17:04.813052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.928 [2024-07-24 19:17:04.813068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.928 [2024-07-24 19:17:04.813076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.928 [2024-07-24 19:17:04.813084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc57840) on tqpair=0xbf7400 00:19:58.928 [2024-07-24 19:17:04.813102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.928 [2024-07-24 19:17:04.813112] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.928 [2024-07-24 19:17:04.813119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7400) 00:19:58.928 [2024-07-24 19:17:04.813131] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.928 [2024-07-24 19:17:04.813159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc57840, cid 3, qid 0 00:19:58.928 [2024-07-24 19:17:04.813288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.928 [2024-07-24 19:17:04.813302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.928 [2024-07-24 19:17:04.813310] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.928 [2024-07-24 19:17:04.813318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc57840) on tqpair=0xbf7400 00:19:58.928 [2024-07-24 19:17:04.813328] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:58.928 [2024-07-24 19:17:04.813337] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:58.928 [2024-07-24 19:17:04.813354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.928 [2024-07-24 19:17:04.813364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.928 [2024-07-24 19:17:04.813371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7400) 00:19:58.928 [2024-07-24 19:17:04.813383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.928 [2024-07-24 19:17:04.813406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc57840, cid 3, qid 0 00:19:58.928 [2024-07-24 19:17:04.813518] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.928 [2024-07-24 19:17:04.813534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.928 [2024-07-24 19:17:04.813541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.928 [2024-07-24 19:17:04.813549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc57840) on tqpair=0xbf7400 00:19:58.928 [2024-07-24 19:17:04.813569] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.928 [2024-07-24 19:17:04.813580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.928 [2024-07-24 19:17:04.813587] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7400) 00:19:58.928 [2024-07-24 19:17:04.813599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.928 [2024-07-24 19:17:04.813622] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc57840, cid 3, qid 0 00:19:58.929 [2024-07-24 19:17:04.813730] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.929 [2024-07-24 19:17:04.813746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.929 [2024-07-24 19:17:04.813754] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.813761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc57840) on tqpair=0xbf7400 00:19:58.929 [2024-07-24 19:17:04.813779] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.813790] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.813797] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7400) 00:19:58.929 [2024-07-24 19:17:04.813809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.929 [2024-07-24 19:17:04.813833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc57840, cid 3, qid 0 00:19:58.929 [2024-07-24 19:17:04.813950] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.929 [2024-07-24 19:17:04.813965] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.929 [2024-07-24 19:17:04.813976] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.813985] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc57840) on tqpair=0xbf7400 00:19:58.929 [2024-07-24 19:17:04.814003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.814013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.814020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7400) 00:19:58.929 [2024-07-24 19:17:04.814032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.929 [2024-07-24 19:17:04.814056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc57840, cid 3, qid 0 00:19:58.929 [2024-07-24 19:17:04.814166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.929 [2024-07-24 19:17:04.814181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.929 [2024-07-24 19:17:04.814189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.814196] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc57840) on tqpair=0xbf7400 00:19:58.929 [2024-07-24 19:17:04.814214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.814224] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.814231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7400) 00:19:58.929 [2024-07-24 19:17:04.814243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.929 [2024-07-24 19:17:04.814265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc57840, cid 3, qid 0 00:19:58.929 [2024-07-24 19:17:04.814372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.929 [2024-07-24 19:17:04.814388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.929 [2024-07-24 19:17:04.814396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.814403] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc57840) on tqpair=0xbf7400 00:19:58.929 [2024-07-24 19:17:04.814422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.814432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.814439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7400) 00:19:58.929 [2024-07-24 19:17:04.814451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.929 [2024-07-24 19:17:04.814475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc57840, cid 3, qid 0 00:19:58.929 [2024-07-24 19:17:04.818512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.929 [2024-07-24 19:17:04.818535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.929 [2024-07-24 19:17:04.818543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.818551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc57840) on tqpair=0xbf7400 00:19:58.929 [2024-07-24 19:17:04.818571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.818581] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.818588] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf7400) 00:19:58.929 [2024-07-24 19:17:04.818600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.929 [2024-07-24 19:17:04.818625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc57840, cid 3, qid 0 00:19:58.929 [2024-07-24 19:17:04.818731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.929 [2024-07-24 19:17:04.818748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.929 [2024-07-24 19:17:04.818755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.818769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc57840) on tqpair=0xbf7400 00:19:58.929 [2024-07-24 19:17:04.818785] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:19:58.929 00:19:58.929 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:58.929 [2024-07-24 19:17:04.855126] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
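For readers following the trace: the host/identify.sh step above shells out to spdk_nvme_identify with a transport-ID string, and the DEBUG lines that follow are SPDK's NVMe driver bringing up the admin queue over TCP against nqn.2016-06.io.spdk:cnode1. A minimal sketch of the same connect-and-identify flow through SPDK's public API is shown below; it assumes spdk/env.h and spdk/nvme.h from the tree under test, the program name identify_sketch is a placeholder, and this is not the identify tool's actual source.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* placeholder app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* The same -r string the test passes on the command line above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous connect: this one call produces the icreq/icresp,
	 * FABRIC CONNECT, CC.EN = 1 and wait-for-CSTS.RDY = 1 sequence
	 * traced in the DEBUG lines that follow in this log. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* IDENTIFY controller data is fetched and cached during connect;
	 * CNTLID and the raw MDTS field below correspond to the values
	 * the driver prints ("CNTLID 0x0001", "MDTS max_xfer_size ..."). */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("CNTLID 0x%04x, MDTS %u\n",
	       (unsigned)cdata->cntlid, (unsigned)cdata->mdts);

	spdk_nvme_detach(ctrlr);
	return 0;
}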
00:19:58.929 [2024-07-24 19:17:04.855176] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603641 ] 00:19:58.929 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.929 [2024-07-24 19:17:04.896015] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:58.929 [2024-07-24 19:17:04.896082] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:58.929 [2024-07-24 19:17:04.896093] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:58.929 [2024-07-24 19:17:04.896109] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:58.929 [2024-07-24 19:17:04.896124] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:58.929 [2024-07-24 19:17:04.896337] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:58.929 [2024-07-24 19:17:04.896377] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x21b1400 0 00:19:58.929 [2024-07-24 19:17:04.910496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:58.929 [2024-07-24 19:17:04.910523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:58.929 [2024-07-24 19:17:04.910533] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:58.929 [2024-07-24 19:17:04.910540] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:58.929 [2024-07-24 19:17:04.910587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.910599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.910607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21b1400) 00:19:58.929 [2024-07-24 19:17:04.910623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:58.929 [2024-07-24 19:17:04.910651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22113c0, cid 0, qid 0 00:19:58.929 [2024-07-24 19:17:04.918497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.929 [2024-07-24 19:17:04.918516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.929 [2024-07-24 19:17:04.918525] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.918533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22113c0) on tqpair=0x21b1400 00:19:58.929 [2024-07-24 19:17:04.918553] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:58.929 [2024-07-24 19:17:04.918566] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:58.929 [2024-07-24 19:17:04.918576] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:58.929 [2024-07-24 19:17:04.918600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.929 [2024-07-24 19:17:04.918609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
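The repeating "setting state to ..." lines above come from the controller-initialization state machine in nvme_ctrlr.c (connect adminq, read vs, read cap, check en, disable, enable, wait for CSTS.RDY, identify, configure AER, keep alive, ready), which the host application advances by polling. Roughly, with SPDK's public async API, the polling pattern looks like the sketch below; connect_polled, attach_cb, and g_ctrlr are made-up names, env initialization is assumed done, and this is an illustration rather than the identify tool's code.

#include <errno.h>
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;	/* set once init reaches "ready" */

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	/* Invoked after the final "setting state to ready" transition. */
	g_ctrlr = ctrlr;
}

static struct spdk_nvme_ctrlr *
connect_polled(const struct spdk_nvme_transport_id *trid)
{
	struct spdk_nvme_probe_ctx *probe_ctx;

	probe_ctx = spdk_nvme_connect_async(trid, NULL, attach_cb);
	if (probe_ctx == NULL) {
		return NULL;
	}
	/* Each poll advances the init state machine one or more steps; the
	 * DEBUG state transitions in this log are emitted from inside these
	 * calls. -EAGAIN means the probe is still in progress. */
	while (spdk_nvme_probe_poll_async(probe_ctx) == -EAGAIN) {
		;
	}
	return g_ctrlr;
}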
00:19:58.929 [2024-07-24 19:17:04.918621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21b1400) 00:19:58.929 [2024-07-24 19:17:04.918640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.929 [2024-07-24 19:17:04.918665] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22113c0, cid 0, qid 0 00:19:58.930 [2024-07-24 19:17:04.918808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.930 [2024-07-24 19:17:04.918824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.930 [2024-07-24 19:17:04.918832] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.918840] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22113c0) on tqpair=0x21b1400 00:19:58.930 [2024-07-24 19:17:04.918854] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:58.930 [2024-07-24 19:17:04.918870] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:58.930 [2024-07-24 19:17:04.918884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.918893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.918900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21b1400) 00:19:58.930 [2024-07-24 19:17:04.918913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.930 [2024-07-24 19:17:04.918936] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22113c0, cid 0, qid 0 00:19:58.930 [2024-07-24 19:17:04.919056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.930 [2024-07-24 19:17:04.919069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.930 [2024-07-24 19:17:04.919077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.919084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22113c0) on tqpair=0x21b1400 00:19:58.930 [2024-07-24 19:17:04.919094] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:58.930 [2024-07-24 19:17:04.919109] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:58.930 [2024-07-24 19:17:04.919122] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.919130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.919138] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21b1400) 00:19:58.930 [2024-07-24 19:17:04.919150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.930 [2024-07-24 19:17:04.919172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22113c0, cid 0, qid 0 00:19:58.930 [2024-07-24 19:17:04.919298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.930 [2024-07-24 19:17:04.919313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:19:58.930 [2024-07-24 19:17:04.919321] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.919329] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22113c0) on tqpair=0x21b1400 00:19:58.930 [2024-07-24 19:17:04.919338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:58.930 [2024-07-24 19:17:04.919356] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.919366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.919373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21b1400) 00:19:58.930 [2024-07-24 19:17:04.919385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.930 [2024-07-24 19:17:04.919407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22113c0, cid 0, qid 0 00:19:58.930 [2024-07-24 19:17:04.919543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.930 [2024-07-24 19:17:04.919558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.930 [2024-07-24 19:17:04.919566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.919574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22113c0) on tqpair=0x21b1400 00:19:58.930 [2024-07-24 19:17:04.919582] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:58.930 [2024-07-24 19:17:04.919591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:58.930 [2024-07-24 19:17:04.919606] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:58.930 [2024-07-24 19:17:04.919716] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:58.930 [2024-07-24 19:17:04.919730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:58.930 [2024-07-24 19:17:04.919744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.919752] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.919760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21b1400) 00:19:58.930 [2024-07-24 19:17:04.919771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.930 [2024-07-24 19:17:04.919794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22113c0, cid 0, qid 0 00:19:58.930 [2024-07-24 19:17:04.919925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.930 [2024-07-24 19:17:04.919938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.930 [2024-07-24 19:17:04.919945] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.919953] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22113c0) on 
tqpair=0x21b1400 00:19:58.930 [2024-07-24 19:17:04.919962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:58.930 [2024-07-24 19:17:04.919979] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.919989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.919996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21b1400) 00:19:58.930 [2024-07-24 19:17:04.920009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.930 [2024-07-24 19:17:04.920033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22113c0, cid 0, qid 0 00:19:58.930 [2024-07-24 19:17:04.920143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.930 [2024-07-24 19:17:04.920156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.930 [2024-07-24 19:17:04.920164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.920171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22113c0) on tqpair=0x21b1400 00:19:58.930 [2024-07-24 19:17:04.920180] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:58.930 [2024-07-24 19:17:04.920189] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:58.930 [2024-07-24 19:17:04.920203] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:58.930 [2024-07-24 19:17:04.920219] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:58.930 [2024-07-24 19:17:04.920238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.920247] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21b1400) 00:19:58.930 [2024-07-24 19:17:04.920259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.930 [2024-07-24 19:17:04.920282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22113c0, cid 0, qid 0 00:19:58.930 [2024-07-24 19:17:04.920409] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.930 [2024-07-24 19:17:04.920422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.930 [2024-07-24 19:17:04.920430] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.920437] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21b1400): datao=0, datal=4096, cccid=0 00:19:58.930 [2024-07-24 19:17:04.920446] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22113c0) on tqpair(0x21b1400): expected_datao=0, payload_size=4096 00:19:58.930 [2024-07-24 19:17:04.920454] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.930 [2024-07-24 19:17:04.920466] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.920475] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.920496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.931 [2024-07-24 19:17:04.920508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.931 [2024-07-24 19:17:04.920515] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.920523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22113c0) on tqpair=0x21b1400 00:19:58.931 [2024-07-24 19:17:04.920535] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:58.931 [2024-07-24 19:17:04.920544] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:58.931 [2024-07-24 19:17:04.920553] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:58.931 [2024-07-24 19:17:04.920561] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:58.931 [2024-07-24 19:17:04.920569] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:58.931 [2024-07-24 19:17:04.920578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:58.931 [2024-07-24 19:17:04.920594] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:58.931 [2024-07-24 19:17:04.920612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.920622] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.920629] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21b1400) 00:19:58.931 [2024-07-24 19:17:04.920641] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.931 [2024-07-24 19:17:04.920664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22113c0, cid 0, qid 0 00:19:58.931 [2024-07-24 19:17:04.920769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.931 [2024-07-24 19:17:04.920781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.931 [2024-07-24 19:17:04.920789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.920796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22113c0) on tqpair=0x21b1400 00:19:58.931 [2024-07-24 19:17:04.920808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.920816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.920823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21b1400) 00:19:58.931 [2024-07-24 19:17:04.920839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.931 [2024-07-24 19:17:04.920851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.920865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.920873] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x21b1400) 00:19:58.931 [2024-07-24 19:17:04.920883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.931 [2024-07-24 19:17:04.920893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.920901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.920908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x21b1400) 00:19:58.931 [2024-07-24 19:17:04.920918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.931 [2024-07-24 19:17:04.920928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.920936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.920943] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21b1400) 00:19:58.931 [2024-07-24 19:17:04.920953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.931 [2024-07-24 19:17:04.920962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:58.931 [2024-07-24 19:17:04.920982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:58.931 [2024-07-24 19:17:04.920996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.921004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21b1400) 00:19:58.931 [2024-07-24 19:17:04.921016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.931 [2024-07-24 19:17:04.921039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22113c0, cid 0, qid 0 00:19:58.931 [2024-07-24 19:17:04.921051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211540, cid 1, qid 0 00:19:58.931 [2024-07-24 19:17:04.921060] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22116c0, cid 2, qid 0 00:19:58.931 [2024-07-24 19:17:04.921069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211840, cid 3, qid 0 00:19:58.931 [2024-07-24 19:17:04.921078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22119c0, cid 4, qid 0 00:19:58.931 [2024-07-24 19:17:04.921212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.931 [2024-07-24 19:17:04.921227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.931 [2024-07-24 19:17:04.921235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.921242] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22119c0) on tqpair=0x21b1400 00:19:58.931 [2024-07-24 19:17:04.921251] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:58.931 [2024-07-24 19:17:04.921261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:19:58.931 [2024-07-24 19:17:04.921282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:58.931 [2024-07-24 19:17:04.921298] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:58.931 [2024-07-24 19:17:04.921314] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.921324] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.921333] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21b1400) 00:19:58.931 [2024-07-24 19:17:04.921345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.931 [2024-07-24 19:17:04.921369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22119c0, cid 4, qid 0 00:19:58.931 [2024-07-24 19:17:04.921472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.931 [2024-07-24 19:17:04.921497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.931 [2024-07-24 19:17:04.921505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.921523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22119c0) on tqpair=0x21b1400 00:19:58.931 [2024-07-24 19:17:04.921607] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:58.931 [2024-07-24 19:17:04.921630] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:58.931 [2024-07-24 19:17:04.921647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.921655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21b1400) 00:19:58.931 [2024-07-24 19:17:04.921667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.931 [2024-07-24 19:17:04.921690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22119c0, cid 4, qid 0 00:19:58.931 [2024-07-24 19:17:04.921802] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.931 [2024-07-24 19:17:04.921817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.931 [2024-07-24 19:17:04.921825] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.921832] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21b1400): datao=0, datal=4096, cccid=4 00:19:58.931 [2024-07-24 19:17:04.921841] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22119c0) on tqpair(0x21b1400): expected_datao=0, payload_size=4096 00:19:58.931 [2024-07-24 19:17:04.921849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.921868] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.921878] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.921909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:19:58.931 [2024-07-24 19:17:04.921921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.931 [2024-07-24 19:17:04.921929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.921936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22119c0) on tqpair=0x21b1400 00:19:58.931 [2024-07-24 19:17:04.921954] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:58.931 [2024-07-24 19:17:04.921978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:58.931 [2024-07-24 19:17:04.921997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:58.931 [2024-07-24 19:17:04.922013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.922021] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21b1400) 00:19:58.931 [2024-07-24 19:17:04.922033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.931 [2024-07-24 19:17:04.922056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22119c0, cid 4, qid 0 00:19:58.931 [2024-07-24 19:17:04.922176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.931 [2024-07-24 19:17:04.922190] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.931 [2024-07-24 19:17:04.922198] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.931 [2024-07-24 19:17:04.922205] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21b1400): datao=0, datal=4096, cccid=4 00:19:58.932 [2024-07-24 19:17:04.922213] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22119c0) on tqpair(0x21b1400): expected_datao=0, payload_size=4096 00:19:58.932 [2024-07-24 19:17:04.922222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.922240] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.922249] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.922268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.932 [2024-07-24 19:17:04.922279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.932 [2024-07-24 19:17:04.922286] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.922294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22119c0) on tqpair=0x21b1400 00:19:58.932 [2024-07-24 19:17:04.922319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:58.932 [2024-07-24 19:17:04.922340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:58.932 [2024-07-24 19:17:04.922356] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.922364] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21b1400) 00:19:58.932 [2024-07-24 19:17:04.922376] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.932 [2024-07-24 19:17:04.922399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22119c0, cid 4, qid 0 00:19:58.932 [2024-07-24 19:17:04.926495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.932 [2024-07-24 19:17:04.926514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.932 [2024-07-24 19:17:04.926522] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.926529] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21b1400): datao=0, datal=4096, cccid=4 00:19:58.932 [2024-07-24 19:17:04.926538] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22119c0) on tqpair(0x21b1400): expected_datao=0, payload_size=4096 00:19:58.932 [2024-07-24 19:17:04.926546] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.926557] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.926566] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.926575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.932 [2024-07-24 19:17:04.926586] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.932 [2024-07-24 19:17:04.926593] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.926600] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22119c0) on tqpair=0x21b1400 00:19:58.932 [2024-07-24 19:17:04.926616] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:58.932 [2024-07-24 19:17:04.926633] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:58.932 [2024-07-24 19:17:04.926650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:58.932 [2024-07-24 19:17:04.926674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:58.932 [2024-07-24 19:17:04.926688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:58.932 [2024-07-24 19:17:04.926698] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:58.932 [2024-07-24 19:17:04.926708] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:58.932 [2024-07-24 19:17:04.926717] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:58.932 [2024-07-24 19:17:04.926727] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:58.932 [2024-07-24 19:17:04.926747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.926756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x21b1400) 00:19:58.932 [2024-07-24 19:17:04.926768] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.932 [2024-07-24 19:17:04.926781] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.926788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.926796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21b1400) 00:19:58.932 [2024-07-24 19:17:04.926806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.932 [2024-07-24 19:17:04.926834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22119c0, cid 4, qid 0 00:19:58.932 [2024-07-24 19:17:04.926846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211b40, cid 5, qid 0 00:19:58.932 [2024-07-24 19:17:04.926957] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.932 [2024-07-24 19:17:04.926970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.932 [2024-07-24 19:17:04.926978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.926986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22119c0) on tqpair=0x21b1400 00:19:58.932 [2024-07-24 19:17:04.926997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.932 [2024-07-24 19:17:04.927007] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.932 [2024-07-24 19:17:04.927015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.927022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211b40) on tqpair=0x21b1400 00:19:58.932 [2024-07-24 19:17:04.927039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.927048] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21b1400) 00:19:58.932 [2024-07-24 19:17:04.927060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.932 [2024-07-24 19:17:04.927082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211b40, cid 5, qid 0 00:19:58.932 [2024-07-24 19:17:04.927183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.932 [2024-07-24 19:17:04.927199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.932 [2024-07-24 19:17:04.927206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.927214] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211b40) on tqpair=0x21b1400 00:19:58.932 [2024-07-24 19:17:04.927231] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.927241] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21b1400) 00:19:58.932 [2024-07-24 19:17:04.927252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.932 [2024-07-24 19:17:04.927279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211b40, cid 5, qid 0 00:19:58.932 [2024-07-24 19:17:04.927377] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.932 [2024-07-24 19:17:04.927392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.932 [2024-07-24 19:17:04.927400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.927408] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211b40) on tqpair=0x21b1400 00:19:58.932 [2024-07-24 19:17:04.927425] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.927434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21b1400) 00:19:58.932 [2024-07-24 19:17:04.927446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.932 [2024-07-24 19:17:04.927468] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211b40, cid 5, qid 0 00:19:58.932 [2024-07-24 19:17:04.927573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.932 [2024-07-24 19:17:04.927588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.932 [2024-07-24 19:17:04.927595] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.927603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211b40) on tqpair=0x21b1400 00:19:58.932 [2024-07-24 19:17:04.927628] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.927640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21b1400) 00:19:58.932 [2024-07-24 19:17:04.927651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.932 [2024-07-24 19:17:04.927665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.927673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21b1400) 00:19:58.932 [2024-07-24 19:17:04.927684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.932 [2024-07-24 19:17:04.927697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.927706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x21b1400) 00:19:58.932 [2024-07-24 19:17:04.927716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.932 [2024-07-24 19:17:04.927730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.932 [2024-07-24 19:17:04.927738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x21b1400) 00:19:58.932 [2024-07-24 19:17:04.927749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.932 [2024-07-24 19:17:04.927772] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211b40, cid 5, qid 0 00:19:58.932 [2024-07-24 19:17:04.927784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22119c0, cid 4, qid 0 
00:19:58.932 [2024-07-24 19:17:04.927793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211cc0, cid 6, qid 0 00:19:58.933 [2024-07-24 19:17:04.927802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211e40, cid 7, qid 0 00:19:58.933 [2024-07-24 19:17:04.928008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.933 [2024-07-24 19:17:04.928024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.933 [2024-07-24 19:17:04.928032] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928039] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21b1400): datao=0, datal=8192, cccid=5 00:19:58.933 [2024-07-24 19:17:04.928048] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2211b40) on tqpair(0x21b1400): expected_datao=0, payload_size=8192 00:19:58.933 [2024-07-24 19:17:04.928061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928083] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928092] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928106] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.933 [2024-07-24 19:17:04.928117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.933 [2024-07-24 19:17:04.928124] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928131] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21b1400): datao=0, datal=512, cccid=4 00:19:58.933 [2024-07-24 19:17:04.928140] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22119c0) on tqpair(0x21b1400): expected_datao=0, payload_size=512 00:19:58.933 [2024-07-24 19:17:04.928148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928159] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928167] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.933 [2024-07-24 19:17:04.928186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.933 [2024-07-24 19:17:04.928193] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928201] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21b1400): datao=0, datal=512, cccid=6 00:19:58.933 [2024-07-24 19:17:04.928209] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2211cc0) on tqpair(0x21b1400): expected_datao=0, payload_size=512 00:19:58.933 [2024-07-24 19:17:04.928217] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928228] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928236] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.933 [2024-07-24 19:17:04.928255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.933 [2024-07-24 19:17:04.928263] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928270] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21b1400): datao=0, datal=4096, cccid=7 00:19:58.933 [2024-07-24 19:17:04.928278] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2211e40) on tqpair(0x21b1400): expected_datao=0, payload_size=4096 00:19:58.933 [2024-07-24 19:17:04.928287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928297] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928306] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928318] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.933 [2024-07-24 19:17:04.928329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.933 [2024-07-24 19:17:04.928336] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211b40) on tqpair=0x21b1400 00:19:58.933 [2024-07-24 19:17:04.928365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.933 [2024-07-24 19:17:04.928377] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.933 [2024-07-24 19:17:04.928385] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22119c0) on tqpair=0x21b1400 00:19:58.933 [2024-07-24 19:17:04.928410] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.933 [2024-07-24 19:17:04.928422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.933 [2024-07-24 19:17:04.928429] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211cc0) on tqpair=0x21b1400 00:19:58.933 [2024-07-24 19:17:04.928453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.933 [2024-07-24 19:17:04.928464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.933 [2024-07-24 19:17:04.928471] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.933 [2024-07-24 19:17:04.928487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211e40) on tqpair=0x21b1400 00:19:58.933 ===================================================== 00:19:58.933 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:58.933 ===================================================== 00:19:58.933 Controller Capabilities/Features 00:19:58.933 ================================ 00:19:58.933 Vendor ID: 8086 00:19:58.933 Subsystem Vendor ID: 8086 00:19:58.933 Serial Number: SPDK00000000000001 00:19:58.933 Model Number: SPDK bdev Controller 00:19:58.933 Firmware Version: 24.09 00:19:58.933 Recommended Arb Burst: 6 00:19:58.933 IEEE OUI Identifier: e4 d2 5c 00:19:58.933 Multi-path I/O 00:19:58.933 May have multiple subsystem ports: Yes 00:19:58.933 May have multiple controllers: Yes 00:19:58.933 Associated with SR-IOV VF: No 00:19:58.933 Max Data Transfer Size: 131072 00:19:58.933 Max Number of Namespaces: 32 00:19:58.933 Max Number of I/O Queues: 127 00:19:58.933 NVMe Specification Version (VS): 1.3 00:19:58.933 NVMe Specification Version (Identify): 1.3 00:19:58.933 Maximum Queue Entries: 128 00:19:58.933 Contiguous Queues Required: Yes 00:19:58.933 
Arbitration Mechanisms Supported 00:19:58.933 Weighted Round Robin: Not Supported 00:19:58.933 Vendor Specific: Not Supported 00:19:58.933 Reset Timeout: 15000 ms 00:19:58.933 Doorbell Stride: 4 bytes 00:19:58.933 NVM Subsystem Reset: Not Supported 00:19:58.933 Command Sets Supported 00:19:58.933 NVM Command Set: Supported 00:19:58.933 Boot Partition: Not Supported 00:19:58.933 Memory Page Size Minimum: 4096 bytes 00:19:58.933 Memory Page Size Maximum: 4096 bytes 00:19:58.933 Persistent Memory Region: Not Supported 00:19:58.933 Optional Asynchronous Events Supported 00:19:58.933 Namespace Attribute Notices: Supported 00:19:58.933 Firmware Activation Notices: Not Supported 00:19:58.933 ANA Change Notices: Not Supported 00:19:58.933 PLE Aggregate Log Change Notices: Not Supported 00:19:58.933 LBA Status Info Alert Notices: Not Supported 00:19:58.933 EGE Aggregate Log Change Notices: Not Supported 00:19:58.933 Normal NVM Subsystem Shutdown event: Not Supported 00:19:58.933 Zone Descriptor Change Notices: Not Supported 00:19:58.933 Discovery Log Change Notices: Not Supported 00:19:58.933 Controller Attributes 00:19:58.933 128-bit Host Identifier: Supported 00:19:58.933 Non-Operational Permissive Mode: Not Supported 00:19:58.933 NVM Sets: Not Supported 00:19:58.933 Read Recovery Levels: Not Supported 00:19:58.933 Endurance Groups: Not Supported 00:19:58.933 Predictable Latency Mode: Not Supported 00:19:58.933 Traffic Based Keep Alive: Not Supported 00:19:58.933 Namespace Granularity: Not Supported 00:19:58.933 SQ Associations: Not Supported 00:19:58.933 UUID List: Not Supported 00:19:58.933 Multi-Domain Subsystem: Not Supported 00:19:58.933 Fixed Capacity Management: Not Supported 00:19:58.933 Variable Capacity Management: Not Supported 00:19:58.933 Delete Endurance Group: Not Supported 00:19:58.933 Delete NVM Set: Not Supported 00:19:58.933 Extended LBA Formats Supported: Not Supported 00:19:58.933 Flexible Data Placement Supported: Not Supported 00:19:58.933 00:19:58.933 Controller Memory Buffer Support 00:19:58.933 ================================ 00:19:58.933 Supported: No 00:19:58.933 00:19:58.933 Persistent Memory Region Support 00:19:58.933 ================================ 00:19:58.933 Supported: No 00:19:58.933 00:19:58.933 Admin Command Set Attributes 00:19:58.933 ============================ 00:19:58.933 Security Send/Receive: Not Supported 00:19:58.933 Format NVM: Not Supported 00:19:58.933 Firmware Activate/Download: Not Supported 00:19:58.933 Namespace Management: Not Supported 00:19:58.933 Device Self-Test: Not Supported 00:19:58.933 Directives: Not Supported 00:19:58.933 NVMe-MI: Not Supported 00:19:58.933 Virtualization Management: Not Supported 00:19:58.933 Doorbell Buffer Config: Not Supported 00:19:58.933 Get LBA Status Capability: Not Supported 00:19:58.933 Command & Feature Lockdown Capability: Not Supported 00:19:58.933 Abort Command Limit: 4 00:19:58.933 Async Event Request Limit: 4 00:19:58.933 Number of Firmware Slots: N/A 00:19:58.933 Firmware Slot 1 Read-Only: N/A 00:19:58.933 Firmware Activation Without Reset: N/A 00:19:58.933 Multiple Update Detection Support: N/A 00:19:58.933 Firmware Update Granularity: No Information Provided 00:19:58.933 Per-Namespace SMART Log: No 00:19:58.933 Asymmetric Namespace Access Log Page: Not Supported 00:19:58.933 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:58.933 Command Effects Log Page: Supported 00:19:58.934 Get Log Page Extended Data: Supported 00:19:58.934 Telemetry Log Pages: Not Supported 00:19:58.934 Persistent Event Log 
Pages: Not Supported 00:19:58.934 Supported Log Pages Log Page: May Support 00:19:58.934 Commands Supported & Effects Log Page: Not Supported 00:19:58.934 Feature Identifiers & Effects Log Page: May Support 00:19:58.934 NVMe-MI Commands & Effects Log Page: May Support 00:19:58.934 Data Area 4 for Telemetry Log: Not Supported 00:19:58.934 Error Log Page Entries Supported: 128 00:19:58.934 Keep Alive: Supported 00:19:58.934 Keep Alive Granularity: 10000 ms 00:19:58.934 00:19:58.934 NVM Command Set Attributes 00:19:58.934 ========================== 00:19:58.934 Submission Queue Entry Size 00:19:58.934 Max: 64 00:19:58.934 Min: 64 00:19:58.934 Completion Queue Entry Size 00:19:58.934 Max: 16 00:19:58.934 Min: 16 00:19:58.934 Number of Namespaces: 32 00:19:58.934 Compare Command: Supported 00:19:58.934 Write Uncorrectable Command: Not Supported 00:19:58.934 Dataset Management Command: Supported 00:19:58.934 Write Zeroes Command: Supported 00:19:58.934 Set Features Save Field: Not Supported 00:19:58.934 Reservations: Supported 00:19:58.934 Timestamp: Not Supported 00:19:58.934 Copy: Supported 00:19:58.934 Volatile Write Cache: Present 00:19:58.934 Atomic Write Unit (Normal): 1 00:19:58.934 Atomic Write Unit (PFail): 1 00:19:58.934 Atomic Compare & Write Unit: 1 00:19:58.934 Fused Compare & Write: Supported 00:19:58.934 Scatter-Gather List 00:19:58.934 SGL Command Set: Supported 00:19:58.934 SGL Keyed: Supported 00:19:58.934 SGL Bit Bucket Descriptor: Not Supported 00:19:58.934 SGL Metadata Pointer: Not Supported 00:19:58.934 Oversized SGL: Not Supported 00:19:58.934 SGL Metadata Address: Not Supported 00:19:58.934 SGL Offset: Supported 00:19:58.934 Transport SGL Data Block: Not Supported 00:19:58.934 Replay Protected Memory Block: Not Supported 00:19:58.934 00:19:58.934 Firmware Slot Information 00:19:58.934 ========================= 00:19:58.934 Active slot: 1 00:19:58.934 Slot 1 Firmware Revision: 24.09 00:19:58.934 00:19:58.934 00:19:58.934 Commands Supported and Effects 00:19:58.934 ============================== 00:19:58.934 Admin Commands 00:19:58.934 -------------- 00:19:58.934 Get Log Page (02h): Supported 00:19:58.934 Identify (06h): Supported 00:19:58.934 Abort (08h): Supported 00:19:58.934 Set Features (09h): Supported 00:19:58.934 Get Features (0Ah): Supported 00:19:58.934 Asynchronous Event Request (0Ch): Supported 00:19:58.934 Keep Alive (18h): Supported 00:19:58.934 I/O Commands 00:19:58.934 ------------ 00:19:58.934 Flush (00h): Supported LBA-Change 00:19:58.934 Write (01h): Supported LBA-Change 00:19:58.934 Read (02h): Supported 00:19:58.934 Compare (05h): Supported 00:19:58.934 Write Zeroes (08h): Supported LBA-Change 00:19:58.934 Dataset Management (09h): Supported LBA-Change 00:19:58.934 Copy (19h): Supported LBA-Change 00:19:58.934 00:19:58.934 Error Log 00:19:58.934 ========= 00:19:58.934 00:19:58.934 Arbitration 00:19:58.934 =========== 00:19:58.934 Arbitration Burst: 1 00:19:58.934 00:19:58.934 Power Management 00:19:58.934 ================ 00:19:58.934 Number of Power States: 1 00:19:58.934 Current Power State: Power State #0 00:19:58.934 Power State #0: 00:19:58.934 Max Power: 0.00 W 00:19:58.934 Non-Operational State: Operational 00:19:58.934 Entry Latency: Not Reported 00:19:58.934 Exit Latency: Not Reported 00:19:58.934 Relative Read Throughput: 0 00:19:58.934 Relative Read Latency: 0 00:19:58.934 Relative Write Throughput: 0 00:19:58.934 Relative Write Latency: 0 00:19:58.934 Idle Power: Not Reported 00:19:58.934 Active Power: Not Reported 00:19:58.934 
Non-Operational Permissive Mode: Not Supported 00:19:58.934 00:19:58.934 Health Information 00:19:58.934 ================== 00:19:58.934 Critical Warnings: 00:19:58.934 Available Spare Space: OK 00:19:58.934 Temperature: OK 00:19:58.934 Device Reliability: OK 00:19:58.934 Read Only: No 00:19:58.934 Volatile Memory Backup: OK 00:19:58.934 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:58.934 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:58.934 Available Spare: 0% 00:19:58.934 Available Spare Threshold: 0% 00:19:58.934 Life Percentage Used:[2024-07-24 19:17:04.928625] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.934 [2024-07-24 19:17:04.928638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x21b1400) 00:19:58.934 [2024-07-24 19:17:04.928651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.934 [2024-07-24 19:17:04.928675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211e40, cid 7, qid 0 00:19:58.934 [2024-07-24 19:17:04.928784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.934 [2024-07-24 19:17:04.928801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.934 [2024-07-24 19:17:04.928808] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.934 [2024-07-24 19:17:04.928816] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211e40) on tqpair=0x21b1400 00:19:58.934 [2024-07-24 19:17:04.928864] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:58.934 [2024-07-24 19:17:04.928894] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22113c0) on tqpair=0x21b1400 00:19:58.934 [2024-07-24 19:17:04.928909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.934 [2024-07-24 19:17:04.928924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211540) on tqpair=0x21b1400 00:19:58.934 [2024-07-24 19:17:04.928934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.934 [2024-07-24 19:17:04.928943] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22116c0) on tqpair=0x21b1400 00:19:58.934 [2024-07-24 19:17:04.928952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.934 [2024-07-24 19:17:04.928963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211840) on tqpair=0x21b1400 00:19:58.934 [2024-07-24 19:17:04.928979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.934 [2024-07-24 19:17:04.928998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.934 [2024-07-24 19:17:04.929008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.934 [2024-07-24 19:17:04.929015] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21b1400) 00:19:58.934 [2024-07-24 19:17:04.929027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.934 [2024-07-24 19:17:04.929051] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211840, cid 3, qid 0 00:19:58.934 [2024-07-24 19:17:04.929148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.934 [2024-07-24 19:17:04.929163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.934 [2024-07-24 19:17:04.929171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.934 [2024-07-24 19:17:04.929179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211840) on tqpair=0x21b1400 00:19:58.934 [2024-07-24 19:17:04.929191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.934 [2024-07-24 19:17:04.929200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.934 [2024-07-24 19:17:04.929207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21b1400) 00:19:58.934 [2024-07-24 19:17:04.929224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.934 [2024-07-24 19:17:04.929256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211840, cid 3, qid 0 00:19:58.934 [2024-07-24 19:17:04.929367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.934 [2024-07-24 19:17:04.929380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.934 [2024-07-24 19:17:04.929388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.934 [2024-07-24 19:17:04.929395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211840) on tqpair=0x21b1400 00:19:58.934 [2024-07-24 19:17:04.929404] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:58.934 [2024-07-24 19:17:04.929413] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:58.934 [2024-07-24 19:17:04.929430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.934 [2024-07-24 19:17:04.929439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.934 [2024-07-24 19:17:04.929447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21b1400) 00:19:58.934 [2024-07-24 19:17:04.929458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.934 [2024-07-24 19:17:04.929488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211840, cid 3, qid 0 00:19:58.934 [2024-07-24 19:17:04.929576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.934 [2024-07-24 19:17:04.929589] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.934 [2024-07-24 19:17:04.929596] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.934 [2024-07-24 19:17:04.929604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211840) on tqpair=0x21b1400 00:19:58.935 [2024-07-24 19:17:04.929622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.935 [2024-07-24 19:17:04.929632] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.935 [2024-07-24 19:17:04.929639] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21b1400) 00:19:58.935 [2024-07-24 19:17:04.929651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.935 [2024-07-24 19:17:04.929673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211840, cid 3, qid 0 00:19:58.935 [2024-07-24 19:17:04.929769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.935 [2024-07-24 19:17:04.929784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.935 [2024-07-24 19:17:04.929792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.935 [2024-07-24 19:17:04.929799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211840) on tqpair=0x21b1400 00:19:58.935 [2024-07-24 19:17:04.929817] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.935 [2024-07-24 19:17:04.929828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.935 [2024-07-24 19:17:04.929835] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21b1400) 00:19:58.935 [2024-07-24 19:17:04.929847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.935 [2024-07-24 19:17:04.929868] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211840, cid 3, qid 0 00:19:58.935 [2024-07-24 19:17:04.929968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.935 [2024-07-24 19:17:04.929981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.935 [2024-07-24 19:17:04.929988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.935 [2024-07-24 19:17:04.929996] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211840) on tqpair=0x21b1400 00:19:58.935 [2024-07-24 19:17:04.930013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.935 [2024-07-24 19:17:04.930023] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.935 [2024-07-24 19:17:04.930030] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21b1400) 00:19:58.935 [2024-07-24 19:17:04.930045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.935 [2024-07-24 19:17:04.930068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211840, cid 3, qid 0 00:19:58.935 [2024-07-24 19:17:04.933497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.935 [2024-07-24 19:17:04.933517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.935 [2024-07-24 19:17:04.933525] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.935 [2024-07-24 19:17:04.933533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211840) on tqpair=0x21b1400 00:19:58.935 [2024-07-24 19:17:04.933554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.935 [2024-07-24 19:17:04.933564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.935 [2024-07-24 19:17:04.933572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21b1400) 00:19:58.935 [2024-07-24 19:17:04.933584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.935 [2024-07-24 19:17:04.933608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2211840, cid 3, qid 0 00:19:59.195 [2024-07-24 
19:17:04.933707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.195 [2024-07-24 19:17:04.933721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.195 [2024-07-24 19:17:04.933730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.195 [2024-07-24 19:17:04.933738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2211840) on tqpair=0x21b1400 00:19:59.195 [2024-07-24 19:17:04.933754] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:19:59.195 0% 00:19:59.195 Data Units Read: 0 00:19:59.195 Data Units Written: 0 00:19:59.195 Host Read Commands: 0 00:19:59.195 Host Write Commands: 0 00:19:59.195 Controller Busy Time: 0 minutes 00:19:59.195 Power Cycles: 0 00:19:59.195 Power On Hours: 0 hours 00:19:59.195 Unsafe Shutdowns: 0 00:19:59.195 Unrecoverable Media Errors: 0 00:19:59.195 Lifetime Error Log Entries: 0 00:19:59.195 Warning Temperature Time: 0 minutes 00:19:59.195 Critical Temperature Time: 0 minutes 00:19:59.195 00:19:59.195 Number of Queues 00:19:59.195 ================ 00:19:59.195 Number of I/O Submission Queues: 127 00:19:59.195 Number of I/O Completion Queues: 127 00:19:59.195 00:19:59.196 Active Namespaces 00:19:59.196 ================= 00:19:59.196 Namespace ID:1 00:19:59.196 Error Recovery Timeout: Unlimited 00:19:59.196 Command Set Identifier: NVM (00h) 00:19:59.196 Deallocate: Supported 00:19:59.196 Deallocated/Unwritten Error: Not Supported 00:19:59.196 Deallocated Read Value: Unknown 00:19:59.196 Deallocate in Write Zeroes: Not Supported 00:19:59.196 Deallocated Guard Field: 0xFFFF 00:19:59.196 Flush: Supported 00:19:59.196 Reservation: Supported 00:19:59.196 Namespace Sharing Capabilities: Multiple Controllers 00:19:59.196 Size (in LBAs): 131072 (0GiB) 00:19:59.196 Capacity (in LBAs): 131072 (0GiB) 00:19:59.196 Utilization (in LBAs): 131072 (0GiB) 00:19:59.196 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:59.196 EUI64: ABCDEF0123456789 00:19:59.196 UUID: bd1f651f-a790-4cf7-9a72-c764bc18874b 00:19:59.196 Thin Provisioning: Not Supported 00:19:59.196 Per-NS Atomic Units: Yes 00:19:59.196 Atomic Boundary Size (Normal): 0 00:19:59.196 Atomic Boundary Size (PFail): 0 00:19:59.196 Atomic Boundary Offset: 0 00:19:59.196 Maximum Single Source Range Length: 65535 00:19:59.196 Maximum Copy Length: 65535 00:19:59.196 Maximum Source Range Count: 1 00:19:59.196 NGUID/EUI64 Never Reused: No 00:19:59.196 Namespace Write Protected: No 00:19:59.196 Number of LBA Formats: 1 00:19:59.196 Current LBA Format: LBA Format #00 00:19:59.196 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:59.196 00:19:59.196 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:59.196 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:59.196 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.196 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:59.196 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.196 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:59.196 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:59.196 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 
00:19:59.196 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:19:59.196 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.196 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:19:59.196 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.196 19:17:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.196 rmmod nvme_tcp 00:19:59.196 rmmod nvme_fabrics 00:19:59.196 rmmod nvme_keyring 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2603607 ']' 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2603607 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2603607 ']' 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2603607 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2603607 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2603607' 00:19:59.196 killing process with pid 2603607 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2603607 00:19:59.196 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2603607 00:19:59.457 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:59.457 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:59.457 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:59.457 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.457 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.457 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.457 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.457 19:17:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.368 19:17:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:01.368 00:20:01.368 real 0m4.930s 00:20:01.368 user 0m3.921s 00:20:01.368 sys 0m1.597s 00:20:01.368 19:17:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:01.368 19:17:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
00:20:01.368 ************************************ 00:20:01.368 END TEST nvmf_identify 00:20:01.368 ************************************ 00:20:01.368 19:17:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:01.368 19:17:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:01.368 19:17:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:01.368 19:17:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.627 ************************************ 00:20:01.627 START TEST nvmf_perf 00:20:01.627 ************************************ 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:01.627 * Looking for test storage... 00:20:01.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.627 19:17:07 
nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:01.627 19:17:07 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:01.627 19:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:03.535 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.536 19:17:09 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:03.536 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:03.536 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:03.536 Found net devices under 0000:08:00.0: cvl_0_0 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:03.536 Found net devices under 0000:08:00.1: cvl_0_1 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:03.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:03.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:20:03.536 00:20:03.536 --- 10.0.0.2 ping statistics --- 00:20:03.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.536 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:03.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:03.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:20:03.536 00:20:03.536 --- 10.0.0.1 ping statistics --- 00:20:03.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.536 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2605134 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2605134 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:03.536 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2605134 ']' 00:20:03.537 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.537 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:03.537 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.537 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:03.537 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:03.537 [2024-07-24 19:17:09.267120] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:20:03.537 [2024-07-24 19:17:09.267225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.537 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.537 [2024-07-24 19:17:09.334513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.537 [2024-07-24 19:17:09.451636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.537 [2024-07-24 19:17:09.451693] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.537 [2024-07-24 19:17:09.451708] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.537 [2024-07-24 19:17:09.451722] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.537 [2024-07-24 19:17:09.451733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
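
nvmfappstart, traced above, boils down to launching the target inside the test namespace and blocking until its RPC socket answers. A minimal sketch of that launch-and-wait pattern; the polling loop is an illustrative stand-in for the waitforlisten helper, not its actual body:

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died
      sleep 0.5
  done
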
00:20:03.537 [2024-07-24 19:17:09.451844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.537 [2024-07-24 19:17:09.451934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.537 [2024-07-24 19:17:09.451989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.537 [2024-07-24 19:17:09.451993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.796 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:03.796 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:20:03.796 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.796 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:03.796 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:03.796 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.796 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:03.796 19:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:07.121 19:17:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:07.121 19:17:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:07.121 19:17:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:84:00.0 00:20:07.121 19:17:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:07.380 19:17:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:07.380 19:17:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:84:00.0 ']' 00:20:07.380 19:17:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:07.380 19:17:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:07.380 19:17:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:07.637 [2024-07-24 19:17:13.609725] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.637 19:17:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:07.894 19:17:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:07.894 19:17:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:08.151 19:17:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:08.151 19:17:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:08.409 19:17:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:08.666 [2024-07-24 19:17:14.609358] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.666 19:17:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:08.923 19:17:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:84:00.0 ']' 00:20:08.923 19:17:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:20:08.923 19:17:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:08.923 19:17:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:20:10.300 Initializing NVMe Controllers 00:20:10.300 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:20:10.300 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:20:10.300 Initialization complete. Launching workers. 00:20:10.300 ======================================================== 00:20:10.300 Latency(us) 00:20:10.300 Device Information : IOPS MiB/s Average min max 00:20:10.300 PCIE (0000:84:00.0) NSID 1 from core 0: 65704.49 256.66 486.50 22.49 5372.71 00:20:10.300 ======================================================== 00:20:10.300 Total : 65704.49 256.66 486.50 22.49 5372.71 00:20:10.300 00:20:10.300 19:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:10.300 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.681 Initializing NVMe Controllers 00:20:11.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:11.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:11.681 Initialization complete. Launching workers. 
00:20:11.681 ======================================================== 00:20:11.681 Latency(us) 00:20:11.681 Device Information : IOPS MiB/s Average min max 00:20:11.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.00 0.31 13045.07 188.70 45774.20 00:20:11.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 48.00 0.19 21280.25 7941.92 47897.00 00:20:11.681 ======================================================== 00:20:11.681 Total : 127.00 0.50 16157.58 188.70 47897.00 00:20:11.681 00:20:11.681 19:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:11.681 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.061 Initializing NVMe Controllers 00:20:13.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:13.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:13.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:13.061 Initialization complete. Launching workers. 00:20:13.061 ======================================================== 00:20:13.061 Latency(us) 00:20:13.061 Device Information : IOPS MiB/s Average min max 00:20:13.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7779.65 30.39 4114.45 867.39 7914.41 00:20:13.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3876.88 15.14 8280.35 4133.48 15919.73 00:20:13.061 ======================================================== 00:20:13.061 Total : 11656.53 45.53 5500.00 867.39 15919.73 00:20:13.061 00:20:13.061 19:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:13.061 19:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:13.061 19:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:13.061 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.599 Initializing NVMe Controllers 00:20:15.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.599 Controller IO queue size 128, less than required. 00:20:15.599 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:15.599 Controller IO queue size 128, less than required. 00:20:15.599 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:15.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:15.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:15.599 Initialization complete. Launching workers. 
00:20:15.599 ======================================================== 00:20:15.599 Latency(us) 00:20:15.599 Device Information : IOPS MiB/s Average min max 00:20:15.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1347.97 336.99 97028.98 53440.39 152547.66 00:20:15.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 574.99 143.75 229405.26 72963.93 335819.68 00:20:15.599 ======================================================== 00:20:15.599 Total : 1922.96 480.74 136611.07 53440.39 335819.68 00:20:15.599 00:20:15.599 19:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:15.599 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.599 No valid NVMe controllers or AIO or URING devices found 00:20:15.599 Initializing NVMe Controllers 00:20:15.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.599 Controller IO queue size 128, less than required. 00:20:15.599 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:15.599 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:15.599 Controller IO queue size 128, less than required. 00:20:15.599 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:15.599 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:15.599 WARNING: Some requested NVMe devices were skipped 00:20:15.599 19:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:15.599 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.139 Initializing NVMe Controllers 00:20:18.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:18.139 Controller IO queue size 128, less than required. 00:20:18.139 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:18.139 Controller IO queue size 128, less than required. 00:20:18.139 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:18.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:18.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:18.139 Initialization complete. Launching workers. 
00:20:18.139 00:20:18.139 ==================== 00:20:18.139 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:18.139 TCP transport: 00:20:18.139 polls: 12120 00:20:18.139 idle_polls: 6963 00:20:18.139 sock_completions: 5157 00:20:18.139 nvme_completions: 5235 00:20:18.139 submitted_requests: 7870 00:20:18.139 queued_requests: 1 00:20:18.139 00:20:18.139 ==================== 00:20:18.139 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:18.139 TCP transport: 00:20:18.139 polls: 11677 00:20:18.139 idle_polls: 7488 00:20:18.139 sock_completions: 4189 00:20:18.139 nvme_completions: 6257 00:20:18.139 submitted_requests: 9338 00:20:18.139 queued_requests: 1 00:20:18.139 ======================================================== 00:20:18.139 Latency(us) 00:20:18.139 Device Information : IOPS MiB/s Average min max 00:20:18.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1308.37 327.09 100673.47 66581.71 158095.72 00:20:18.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1563.85 390.96 82182.04 40514.54 124519.00 00:20:18.139 ======================================================== 00:20:18.139 Total : 2872.22 718.05 90605.38 40514.54 158095.72 00:20:18.139 00:20:18.139 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:18.139 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:18.707 rmmod nvme_tcp 00:20:18.707 rmmod nvme_fabrics 00:20:18.707 rmmod nvme_keyring 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2605134 ']' 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2605134 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2605134 ']' 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2605134 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2605134 00:20:18.707 19:17:24 
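
The --transport-stat counters printed above support a quick efficiency check: the share of polls that did useful work is (polls - idle_polls) / polls, and sock_completions equaling that difference (5157 and 4189 here, for both namespaces) means essentially one socket completion batch per busy poll. A throwaway computation over the numbers as printed:

  awk 'BEGIN { printf "nsid1 busy polls: %.1f%%  nsid2 busy polls: %.1f%%\n", (12120-6963)/12120*100, (11677-7488)/11677*100 }'
  # -> nsid1 busy polls: 42.5%  nsid2 busy polls: 35.9%
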
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2605134' 00:20:18.707 killing process with pid 2605134 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2605134 00:20:18.707 19:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2605134 00:20:20.087 19:17:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:20.087 19:17:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:20.087 19:17:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:20.087 19:17:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.087 19:17:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:20.087 19:17:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.087 19:17:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.087 19:17:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:22.628 00:20:22.628 real 0m20.721s 00:20:22.628 user 1m5.312s 00:20:22.628 sys 0m4.745s 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:22.628 ************************************ 00:20:22.628 END TEST nvmf_perf 00:20:22.628 ************************************ 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.628 ************************************ 00:20:22.628 START TEST nvmf_fio_host 00:20:22.628 ************************************ 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:22.628 * Looking for test storage... 
00:20:22.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.628 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:22.629 19:17:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:24.008 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:24.008 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.008 
19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:24.008 Found net devices under 0000:08:00.0: cvl_0_0 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.008 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:24.009 Found net devices under 0000:08:00.1: cvl_0_1 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
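
nvmf_tcp_init, traced across the surrounding entries, splits the two-port NIC across namespaces: one port becomes the target side inside cvl_0_0_ns_spdk, its sibling stays in the root namespace as the initiator. The same sequence gathered in one place for readability (every command appears verbatim in the trace):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into the ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                              # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator
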
00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:24.009 19:17:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.009 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.267 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.267 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:24.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:20:24.268 00:20:24.268 --- 10.0.0.2 ping statistics --- 00:20:24.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.268 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:24.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:20:24.268 00:20:24.268 --- 10.0.0.1 ping statistics --- 00:20:24.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.268 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2608181 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 2608181 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2608181 ']' 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:24.268 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.268 [2024-07-24 19:17:30.126072] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:20:24.268 [2024-07-24 19:17:30.126175] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.268 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.268 [2024-07-24 19:17:30.194135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.526 [2024-07-24 19:17:30.312332] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.527 [2024-07-24 19:17:30.312379] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.527 [2024-07-24 19:17:30.312402] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.527 [2024-07-24 19:17:30.312416] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.527 [2024-07-24 19:17:30.312428] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
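
With the target up, host/fio.sh provisions it entirely over rpc.py; the chain traced next condenses to five calls (TCP transport, a 64 MiB / 512-byte-block malloc bdev, one subsystem, one namespace, one listener):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
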
00:20:24.527 [2024-07-24 19:17:30.312493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.527 [2024-07-24 19:17:30.312572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.527 [2024-07-24 19:17:30.312663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.527 [2024-07-24 19:17:30.312696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.527 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:24.527 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:20:24.527 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:24.785 [2024-07-24 19:17:30.705438] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.785 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:24.785 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:24.785 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.785 19:17:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:25.045 Malloc1 00:20:25.045 19:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:25.611 19:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:25.869 19:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:26.128 [2024-07-24 19:17:31.935600] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.128 19:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:26.386 19:17:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:26.386 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:26.387 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:26.387 19:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:26.646 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:26.646 fio-3.35 00:20:26.646 Starting 1 thread 00:20:26.646 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.185 00:20:29.185 test: (groupid=0, jobs=1): err= 0: pid=2608546: Wed Jul 24 19:17:34 2024 00:20:29.185 read: IOPS=7167, BW=28.0MiB/s (29.4MB/s)(56.2MiB/2007msec) 00:20:29.185 slat (usec): min=2, max=146, avg= 2.78, stdev= 1.82 00:20:29.185 clat (usec): min=3158, max=16920, avg=9838.47, stdev=786.26 00:20:29.185 lat (usec): min=3186, max=16922, avg=9841.25, stdev=786.16 00:20:29.185 clat percentiles (usec): 00:20:29.185 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:20:29.185 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:20:29.185 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:20:29.185 | 99.00th=[11469], 99.50th=[11731], 99.90th=[14877], 99.95th=[15926], 00:20:29.185 | 99.99th=[16188] 00:20:29.185 bw ( KiB/s): min=27568, max=29360, per=99.77%, avg=28606.00, stdev=797.23, samples=4 00:20:29.185 iops : min= 6892, max= 7340, avg=7151.50, stdev=199.31, samples=4 00:20:29.185 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(56.0MiB/2007msec); 0 zone 
resets 00:20:29.185 slat (usec): min=2, max=134, avg= 2.90, stdev= 1.36 00:20:29.185 clat (usec): min=1514, max=13801, avg=7989.69, stdev=650.85 00:20:29.185 lat (usec): min=1524, max=13803, avg=7992.59, stdev=650.83 00:20:29.185 clat percentiles (usec): 00:20:29.185 | 1.00th=[ 6521], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7504], 00:20:29.185 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8160], 00:20:29.185 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:20:29.185 | 99.00th=[ 9372], 99.50th=[ 9503], 99.90th=[11600], 99.95th=[12649], 00:20:29.185 | 99.99th=[13829] 00:20:29.185 bw ( KiB/s): min=28464, max=28672, per=100.00%, avg=28598.00, stdev=98.50, samples=4 00:20:29.185 iops : min= 7116, max= 7168, avg=7149.50, stdev=24.62, samples=4 00:20:29.185 lat (msec) : 2=0.01%, 4=0.07%, 10=78.72%, 20=21.19% 00:20:29.185 cpu : usr=66.15%, sys=31.66%, ctx=63, majf=0, minf=40 00:20:29.185 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:29.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:29.185 issued rwts: total=14386,14343,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.185 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:29.185 00:20:29.185 Run status group 0 (all jobs): 00:20:29.185 READ: bw=28.0MiB/s (29.4MB/s), 28.0MiB/s-28.0MiB/s (29.4MB/s-29.4MB/s), io=56.2MiB (58.9MB), run=2007-2007msec 00:20:29.185 WRITE: bw=27.9MiB/s (29.3MB/s), 27.9MiB/s-27.9MiB/s (29.3MB/s-29.3MB/s), io=56.0MiB (58.7MB), run=2007-2007msec 00:20:29.185 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:29.186 19:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:29.186 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:29.186 fio-3.35 00:20:29.186 Starting 1 thread 00:20:29.186 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.717 00:20:31.717 test: (groupid=0, jobs=1): err= 0: pid=2608804: Wed Jul 24 19:17:37 2024 00:20:31.717 read: IOPS=7231, BW=113MiB/s (118MB/s)(227MiB/2010msec) 00:20:31.717 slat (usec): min=3, max=123, avg= 4.37, stdev= 1.67 00:20:31.717 clat (usec): min=2090, max=21251, avg=10280.96, stdev=2483.56 00:20:31.717 lat (usec): min=2095, max=21254, avg=10285.33, stdev=2483.63 00:20:31.717 clat percentiles (usec): 00:20:31.717 | 1.00th=[ 5211], 5.00th=[ 6521], 10.00th=[ 7373], 20.00th=[ 8356], 00:20:31.717 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[10683], 00:20:31.717 | 70.00th=[11338], 80.00th=[12256], 90.00th=[13566], 95.00th=[14615], 00:20:31.717 | 99.00th=[17171], 99.50th=[18744], 99.90th=[20317], 99.95th=[20841], 00:20:31.717 | 99.99th=[21103] 00:20:31.717 bw ( KiB/s): min=51136, max=69792, per=50.55%, avg=58488.00, stdev=8887.42, samples=4 00:20:31.717 iops : min= 3196, max= 4362, avg=3655.50, stdev=555.46, samples=4 00:20:31.717 write: IOPS=4193, BW=65.5MiB/s (68.7MB/s)(120MiB/1830msec); 0 zone resets 00:20:31.717 slat (usec): min=32, max=245, avg=39.34, stdev= 7.11 00:20:31.717 clat (usec): min=6517, max=25379, avg=13182.39, stdev=2342.69 00:20:31.717 lat (usec): min=6559, max=25412, avg=13221.72, stdev=2343.08 00:20:31.717 clat percentiles (usec): 00:20:31.717 | 1.00th=[ 8586], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11207], 00:20:31.717 | 30.00th=[11731], 40.00th=[12387], 50.00th=[12911], 60.00th=[13566], 00:20:31.717 | 70.00th=[14222], 80.00th=[15008], 90.00th=[16188], 95.00th=[17433], 00:20:31.717 | 99.00th=[19530], 99.50th=[20579], 99.90th=[24773], 99.95th=[25035], 00:20:31.717 | 99.99th=[25297] 00:20:31.717 bw ( KiB/s): min=53184, max=72224, per=90.71%, avg=60864.00, stdev=9220.26, samples=4 00:20:31.717 iops : min= 3324, max= 4514, avg=3804.00, stdev=576.27, samples=4 00:20:31.717 lat (msec) : 4=0.17%, 10=33.00%, 20=66.52%, 50=0.32% 00:20:31.717 cpu : usr=78.11%, sys=19.95%, ctx=38, majf=0, minf=58 
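Both fio passes in this test run through SPDK's fio plugin: the wrapper LD_PRELOADs build/fio/spdk_nvme and hands fio a --filename that encodes the transport instead of a block device path. A minimal equivalent invocation, with the transport string taken verbatim from the log; the inline job options here are merely illustrative of what example_config.fio supplies (the plugin requires thread=1):

LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
    --name=test --ioengine=spdk --thread=1 \
    --rw=randrw --bs=4096 --iodepth=128 \
    --time_based=1 --runtime=10 \
    --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'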
00:20:31.717 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:20:31.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:31.717 issued rwts: total=14535,7674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.717 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:31.717 00:20:31.717 Run status group 0 (all jobs): 00:20:31.717 READ: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=227MiB (238MB), run=2010-2010msec 00:20:31.717 WRITE: bw=65.5MiB/s (68.7MB/s), 65.5MiB/s-65.5MiB/s (68.7MB/s-68.7MB/s), io=120MiB (126MB), run=1830-1830msec 00:20:31.717 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:31.717 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:31.717 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:31.717 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:31.718 rmmod nvme_tcp 00:20:31.718 rmmod nvme_fabrics 00:20:31.718 rmmod nvme_keyring 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2608181 ']' 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2608181 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2608181 ']' 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2608181 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2608181 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2608181' 00:20:31.718 killing process with pid 2608181 00:20:31.718 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2608181 00:20:31.718 
19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2608181 00:20:31.977 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:31.977 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:31.977 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:31.977 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:31.977 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:31.977 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.977 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:31.977 19:17:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.518 19:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:34.518 00:20:34.518 real 0m11.812s 00:20:34.518 user 0m35.575s 00:20:34.518 sys 0m3.590s 00:20:34.518 19:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:34.518 19:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.518 ************************************ 00:20:34.518 END TEST nvmf_fio_host 00:20:34.518 ************************************ 00:20:34.518 19:17:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:34.518 19:17:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:34.518 19:17:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.518 ************************************ 00:20:34.518 START TEST nvmf_failover 00:20:34.518 ************************************ 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:34.518 * Looking for test storage... 
00:20:34.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
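failover.sh sources nvmf/common.sh, which pins the three listener ports and derives the initiator identity from nvme-cli, as traced above. A rough sketch of the values being set (the parameter expansion for NVME_HOSTID is an assumption that reproduces the logged values, not necessarily common.sh's exact code):

NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing uuid
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")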
00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:34.518 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.519 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.928 19:17:41 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:35.928 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:35.928 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:35.928 Found net devices under 0000:08:00.0: cvl_0_0 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:35.928 Found net devices under 0000:08:00.1: cvl_0_1 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:35.928 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:35.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:20:35.929 00:20:35.929 --- 10.0.0.2 ping statistics --- 00:20:35.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.929 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:35.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:35.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:20:35.929 00:20:35.929 --- 10.0.0.1 ping statistics --- 00:20:35.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.929 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2610498 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:35.929 19:17:41 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2610498 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2610498 ']' 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:35.929 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:35.929 [2024-07-24 19:17:41.886016] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:20:35.929 [2024-07-24 19:17:41.886112] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.929 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.189 [2024-07-24 19:17:41.954906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:36.189 [2024-07-24 19:17:42.074514] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.189 [2024-07-24 19:17:42.074584] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.189 [2024-07-24 19:17:42.074599] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.189 [2024-07-24 19:17:42.074612] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.189 [2024-07-24 19:17:42.074623] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
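With the target up, the test provisions one subsystem backed by a malloc bdev and listeners on all three ports; the RPC calls traced below reduce to this sketch (flags are verbatim from the log, the loop form is mine):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s "$port"
done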
00:20:36.189 [2024-07-24 19:17:42.074710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.189 [2024-07-24 19:17:42.074762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.189 [2024-07-24 19:17:42.074766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.189 19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:36.189 19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:36.189 19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:36.189 19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:36.189 19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:36.448 19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.448 19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:36.707 [2024-07-24 19:17:42.488981] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.707 19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:36.964 Malloc0 00:20:36.964 19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.222 19:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.488 19:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.745 [2024-07-24 19:17:43.625037] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.745 19:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:38.001 [2024-07-24 19:17:43.869760] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:38.001 19:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:38.259 [2024-07-24 19:17:44.114596] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:38.259 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2610728 00:20:38.259 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:38.259 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:38.259 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2610728 /var/tmp/bdevperf.sock 00:20:38.259 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2610728 ']' 00:20:38.259 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.259 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:38.259 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.259 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:38.259 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:38.516 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.516 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:38.516 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:38.774 NVMe0n1 00:20:38.774 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:39.340 00:20:39.340 19:17:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2610829 00:20:39.340 19:17:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:39.340 19:17:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:40.274 19:17:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.533 [2024-07-24 19:17:46.397993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c3580 is same with the state(5) to be set 00:20:40.533 19:17:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:43.821 19:17:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:44.080 00:20:44.080 19:17:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:44.339 19:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:47.628 19:17:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:20:47.628 [2024-07-24 19:17:53.481457] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.628 19:17:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:48.560 19:17:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:48.820 [2024-07-24 19:17:54.784450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c50d0 is same with the state(5) to be set 00:20:48.820 [2024-07-24 19:17:54.784529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c50d0 is same with the state(5) to be set 00:20:48.820 [2024-07-24 19:17:54.784546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c50d0 is same with the state(5) to be set 00:20:48.820 [2024-07-24 19:17:54.784560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c50d0 is same with the state(5) to be set 00:20:48.820 [2024-07-24 19:17:54.784573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c50d0 is same with the state(5) to be set 00:20:48.820 [2024-07-24 19:17:54.784586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c50d0 is same with the state(5) to be set 00:20:48.820 [2024-07-24 19:17:54.784599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c50d0 is same with the state(5) to be set 00:20:48.820 [2024-07-24 19:17:54.784614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c50d0 is same with the state(5) to be set 00:20:48.820 19:17:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2610829 00:20:55.397 0 00:20:55.397 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2610728 00:20:55.397 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2610728 ']' 00:20:55.397 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2610728 00:20:55.397 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:20:55.397 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:55.397 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2610728 00:20:55.397 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:55.397 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:55.397 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2610728' 00:20:55.397 killing process with pid 2610728 00:20:55.397 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2610728 00:20:55.397 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2610728 00:20:55.397 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:55.397 [2024-07-24 19:17:44.179230] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
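The failover exercise itself, condensed from the trace above: bdevperf attaches the same controller over two portals, the 15 s verify workload starts, and listeners are then pulled and re-added underneath it to force path switches (the exact sleeps and the final removal of port 4422 are elided here; commands are as traced):

rpc=./scripts/rpc.py
bsock=/var/tmp/bdevperf.sock
# two paths to the same subsystem; bdevperf fails over between them
$rpc -s $bsock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s $bsock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s $bsock perform_tests &
# pull the active path, let I/O fail over, then swap in a third portal
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
$rpc -s $bsock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
wait   # let perform_tests run the verify workload to completion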
00:20:55.397 [2024-07-24 19:17:44.179325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610728 ]
00:20:55.397 EAL: No free 2048 kB hugepages reported on node 1
00:20:55.397 [2024-07-24 19:17:44.233945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:55.397 [2024-07-24 19:17:44.351542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:55.397 Running I/O for 15 seconds...
00:20:55.397 [2024-07-24 19:17:46.398886 .. 19:17:46.403527] nvme_qpair.c: first path drop: every in-flight command on qid:1 completed ABORTED - SQ DELETION (00/08) -- READ lba:69480..69720 (SGL TRANSPORT DATA BLOCK), WRITE lba:69744..70376 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), then queued requests completed manually after nvme_qpair_abort_queued_reqs "aborting queued i/o": WRITE lba:70384..70496 and READ lba:69728/69736 (PRP1 0x0 PRP2 0x0); roughly 250 near-identical print_command/print_completion records condensed here
00:20:55.401 [2024-07-24 19:17:46.403589] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1071980 was disconnected and freed. reset controller.
00:20:55.401 [2024-07-24 19:17:46.403617] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:20:55.401 [2024-07-24 19:17:46.403656 .. 403766] nvme_qpair.c: four outstanding ASYNC EVENT REQUEST (0c) commands on the admin queue (qid:0 cid:0..3) completed ABORTED - SQ DELETION (00/08)
00:20:55.401 [2024-07-24 19:17:46.403792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:55.401 [2024-07-24 19:17:46.407899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:55.401 [2024-07-24 19:17:46.407946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104b430 (9): Bad file descriptor
00:20:55.401 [2024-07-24 19:17:46.485434] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
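(Annotation: the sequence above is one complete failover cycle: the dropped portal kills the TCP qpair, every queued command completes with ABORTED - SQ DELETION -- NVMe generic status 00/08, "Command Aborted due to SQ Deletion" -- and bdev_nvme resets the controller against the alternate transport ID, here 10.0.0.2:4421. For an alternate trid to exist, the test registers the same controller under several portals beforehand. A hedged sketch of that registration using the bdev_nvme_attach_controller RPC; the exact flags failover.sh passes are not shown in this excerpt:)

    # first call creates controller NVMe0; repeated calls with the same name
    # and NQN register 10.0.0.2:4421 and 10.0.0.2:4422 as failover trids
    scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1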
00:20:55.401 [2024-07-24 19:17:50.180461 .. 180634] nvme_qpair.c: four outstanding ASYNC EVENT REQUEST (0c) commands on the admin queue (qid:0 cid:3..0) completed ABORTED - SQ DELETION (00/08)
00:20:55.401 [2024-07-24 19:17:50.180648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b430 is same with the state(5) to be set
00:20:55.401 [2024-07-24 19:17:50.181297 .. 182831] nvme_qpair.c: next path drop: in-flight commands on qid:1 completed ABORTED - SQ DELETION (00/08) -- WRITE lba:47224..47336 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with READ lba:46336..46576 (SGL TRANSPORT DATA BLOCK); several dozen near-identical print_command/print_completion records condensed here, and the run continues
00:20:55.402 [2024-07-24 19:17:50.182847] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.402 [2024-07-24 19:17:50.182862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.402 [2024-07-24 19:17:50.182879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.402 [2024-07-24 19:17:50.182894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.402 [2024-07-24 19:17:50.182911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.402 [2024-07-24 19:17:50.182927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.402 [2024-07-24 19:17:50.182944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.402 [2024-07-24 19:17:50.182959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.182976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.182991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46744 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:55.403 [2024-07-24 19:17:50.183839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.183968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.183985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.184000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.184017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.184032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.184049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.184064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.184080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.184095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.184112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.184127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.184144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.403 [2024-07-24 19:17:50.184159] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.403 [2024-07-24 19:17:50.184175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184813] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.184907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.404 [2024-07-24 19:17:50.184939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.404 [2024-07-24 19:17:50.184971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.184988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.185003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.185020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.185035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.185054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.185070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.185086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.185102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.185118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.185133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.185150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.185164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.185181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.185196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.185213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.185227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.185244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.185258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.185275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.185290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.185307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.185321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.185337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.404 [2024-07-24 19:17:50.185353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.404 [2024-07-24 19:17:50.185370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-24 19:17:50.185384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.405 [2024-07-24 19:17:50.185401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-24 19:17:50.185416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.405 [2024-07-24 19:17:50.185432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:55.405 [2024-07-24 19:17:50.185451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
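The burst above (and the second one after the reset below) is the signature of an SPDK TCP qpair being torn down during path failover: every still-queued I/O is completed with ABORTED - SQ DELETION (00/08) before the controller is reconnected on the next listener. A minimal offline-triage sketch follows, assuming the records keep the exact format printed in this log; the file name console.log is a placeholder for a saved copy of this console output. Records arrive wrapped several per physical line, so finditer()/findall() are used rather than one match per line.

#!/usr/bin/env python3
# Sketch: count aborted SPDK I/O and list failover events in a saved
# copy of this console log. Assumes the nvme_qpair.c / bdev_nvme.c
# record formats shown above; "console.log" is a hypothetical path.
import re
import sys
from collections import Counter

CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
                 r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
ABORT = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")
FAILOVER = re.compile(r"bdev_nvme_failover_trid: \*NOTICE\*: "
                      r"Start failover from (\S+) to (\S+)")

def summarize(path):
    ops = Counter()      # READ/WRITE counts among printed commands
    lbas = []            # LBAs touched by the printed commands
    aborts = 0           # completions reporting SQ DELETION
    failovers = []       # (from, to) transport address pairs
    with open(path) as fh:
        for line in fh:
            for m in CMD.finditer(line):         # aborted command prints
                ops[m.group(1)] += 1
                lbas.append(int(m.group(5)))
            aborts += len(ABORT.findall(line))   # SQ DELETION completions
            failovers += FAILOVER.findall(line)  # path changes
    print("commands printed:", sum(ops.values()), dict(ops))
    print("SQ DELETION completions:", aborts)
    if lbas:
        print("lba range: %d-%d" % (min(lbas), max(lbas)))
    for src, dst in failovers:
        print("failover: %s -> %s" % (src, dst))

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")

Run as "python3 summarize_aborts.py console.log"; for this section it should report two WRITE/READ bursts, one failover 10.0.0.2:4421 -> 10.0.0.2:4422, and one successful controller reset.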
00:20:55.405 [2024-07-24 19:17:50.185492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:55.405 [2024-07-24 19:17:50.185510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:55.405 [2024-07-24 19:17:50.185525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47216 len:8 PRP1 0x0 PRP2 0x0
00:20:55.405 [2024-07-24 19:17:50.185539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.405 [2024-07-24 19:17:50.185602] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x106ac30 was disconnected and freed. reset controller.
00:20:55.405 [2024-07-24 19:17:50.185626] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:20:55.405 [2024-07-24 19:17:50.185643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:55.405 [2024-07-24 19:17:50.189794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:55.405 [2024-07-24 19:17:50.189841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104b430 (9): Bad file descriptor
00:20:55.405 [2024-07-24 19:17:50.229388] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:55.405-00:20:55.407 [2024-07-24 19:17:54.786029-54.788577] nvme_qpair.c: [... another long run of near-identical *NOTICE* command/completion pairs omitted: queued READ (lba 66912-67176) and WRITE (lba 67184-67512) commands, sqid:1 nsid:1 len:8, each completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
00:20:55.407 [2024-07-24 19:17:54.788594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:55.407 [2024-07-24 19:17:54.788609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.788626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.788640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.788657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.788672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.788688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.788703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.788719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.788734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.788757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.788773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.788789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.788805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.788821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.788844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.788861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.788877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.788893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.788909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.788925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.788940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.788956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.788971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.788987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.789002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.789019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.789033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.789050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.789065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.789082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.789097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.789113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.789128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.789145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.789160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.789176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.789191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.789209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.789224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.789240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.789258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.789281] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.407 [2024-07-24 19:17:54.789297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.407 [2024-07-24 19:17:54.789314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789611] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.408 [2024-07-24 19:17:54.789889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.789931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:55.408 [2024-07-24 19:17:54.789951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67840 len:8 PRP1 0x0 PRP2 0x0 00:20:55.408 [2024-07-24 19:17:54.789965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 
19:17:54.789985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:55.408 [2024-07-24 19:17:54.789999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:55.408 [2024-07-24 19:17:54.790012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67848 len:8 PRP1 0x0 PRP2 0x0 00:20:55.408 [2024-07-24 19:17:54.790025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.790040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:55.408 [2024-07-24 19:17:54.790051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:55.408 [2024-07-24 19:17:54.790064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67856 len:8 PRP1 0x0 PRP2 0x0 00:20:55.408 [2024-07-24 19:17:54.790078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.790092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:55.408 [2024-07-24 19:17:54.790104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:55.408 [2024-07-24 19:17:54.790120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67864 len:8 PRP1 0x0 PRP2 0x0 00:20:55.408 [2024-07-24 19:17:54.790134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.790149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:55.408 [2024-07-24 19:17:54.790160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:55.408 [2024-07-24 19:17:54.790173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67872 len:8 PRP1 0x0 PRP2 0x0 00:20:55.408 [2024-07-24 19:17:54.790186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.790200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:55.408 [2024-07-24 19:17:54.790213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:55.408 [2024-07-24 19:17:54.790225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67880 len:8 PRP1 0x0 PRP2 0x0 00:20:55.408 [2024-07-24 19:17:54.790238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.790252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:55.408 [2024-07-24 19:17:54.790264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:55.408 [2024-07-24 19:17:54.790276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67888 len:8 PRP1 0x0 PRP2 0x0 00:20:55.408 [2024-07-24 19:17:54.790292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.790307] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:55.408 [2024-07-24 19:17:54.790319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:55.408 [2024-07-24 19:17:54.790331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67896 len:8 PRP1 0x0 PRP2 0x0 00:20:55.408 [2024-07-24 19:17:54.790345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.790359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:55.408 [2024-07-24 19:17:54.790371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:55.408 [2024-07-24 19:17:54.790383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67904 len:8 PRP1 0x0 PRP2 0x0 00:20:55.408 [2024-07-24 19:17:54.790397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.790411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:55.408 [2024-07-24 19:17:54.790423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:55.408 [2024-07-24 19:17:54.790436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67912 len:8 PRP1 0x0 PRP2 0x0 00:20:55.408 [2024-07-24 19:17:54.790450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.790464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:55.408 [2024-07-24 19:17:54.790476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:55.408 [2024-07-24 19:17:54.790496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67920 len:8 PRP1 0x0 PRP2 0x0 00:20:55.408 [2024-07-24 19:17:54.790511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.408 [2024-07-24 19:17:54.790525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:55.408 [2024-07-24 19:17:54.790540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:55.408 [2024-07-24 19:17:54.790553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67928 len:8 PRP1 0x0 PRP2 0x0 00:20:55.408 [2024-07-24 19:17:54.790567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:55.409 [2024-07-24 19:17:54.790631] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x107b210 was disconnected and freed. reset controller. 
00:20:55.409 [2024-07-24 19:17:54.790654] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:20:55.409 [2024-07-24 19:17:54.790694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:55.409 [2024-07-24 19:17:54.790713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.409 [2024-07-24 19:17:54.790731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:55.409 [2024-07-24 19:17:54.790745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.409 [2024-07-24 19:17:54.790761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:55.409 [2024-07-24 19:17:54.790775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.409 [2024-07-24 19:17:54.790790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:55.409 [2024-07-24 19:17:54.790804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:55.409 [2024-07-24 19:17:54.790818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:55.409 [2024-07-24 19:17:54.790889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104b430 (9): Bad file descriptor
00:20:55.409 [2024-07-24 19:17:54.794932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:55.409 [2024-07-24 19:17:54.964701] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
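The successful reset above is the mechanism under test: failover.sh exposes one subsystem on several TCP portals, runs verify I/O through bdevperf, and removes paths one at a time so bdev_nvme must switch to the next registered trid. Condensed from the xtrace that follows, the driving calls amount to roughly the sketch below (not the script verbatim; rpc.py abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, and a bdevperf instance is assumed to already be listening on /var/tmp/bdevperf.sock):

  # advertise two extra portals on the target side
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # register the primary path plus two failover trids on the initiator
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # drop the active path; bdev_nvme fails over to the next registered trid
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1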
00:20:55.409
00:20:55.409 Latency(us)
00:20:55.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:55.409 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:55.409 Verification LBA range: start 0x0 length 0x4000
00:20:55.409 NVMe0n1 : 15.01 7386.95 28.86 585.65 0.00 16021.02 649.29 21845.33
00:20:55.409 ===================================================================================================================
00:20:55.409 Total : 7386.95 28.86 585.65 0.00 16021.02 649.29 21845.33
00:20:55.409 Received shutdown signal, test time was about 15.000000 seconds
00:20:55.409
00:20:55.409 Latency(us)
00:20:55.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:55.409 ===================================================================================================================
00:20:55.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2612261
00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2612261 /var/tmp/bdevperf.sock
00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2612261 ']'
00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
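The pass gate traced just above (grep -c, count=3) is plain counting: each successful path switch ends in exactly one "Resetting controller successful" line, and the long run performs three switches. A hypothetical standalone version of that check, with $testlog standing in for the captured output file (try.txt in this run):

  # three failovers are expected, hence exactly three reset notices
  count=$(grep -c 'Resetting controller successful' "$testlog")
  (( count == 3 )) || exit 1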
00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:55.409 19:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:55.409 [2024-07-24 19:18:01.127564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:55.409 19:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:55.667 [2024-07-24 19:18:01.424380] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:55.667 19:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:55.926 NVMe0n1 00:20:55.926 19:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:56.493 00:20:56.493 19:18:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:56.751 00:20:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:20:57.009 19:18:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:57.267 19:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:00.559 19:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:00.559 19:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:00.559 19:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2612855 00:21:00.559 19:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:00.559 19:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2612855 00:21:01.937 0 00:21:01.937 19:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:01.937 [2024-07-24 19:18:00.569818] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:21:01.937 [2024-07-24 19:18:00.569917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2612261 ] 00:21:01.937 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.937 [2024-07-24 19:18:00.631980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.937 [2024-07-24 19:18:00.749101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.937 [2024-07-24 19:18:03.184273] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:01.937 [2024-07-24 19:18:03.184376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.937 [2024-07-24 19:18:03.184408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.937 [2024-07-24 19:18:03.184427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.937 [2024-07-24 19:18:03.184442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.937 [2024-07-24 19:18:03.184466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.937 [2024-07-24 19:18:03.184487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.937 [2024-07-24 19:18:03.184504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.937 [2024-07-24 19:18:03.184518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.937 [2024-07-24 19:18:03.184534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.937 [2024-07-24 19:18:03.184590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.937 [2024-07-24 19:18:03.184626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f2430 (9): Bad file descriptor 00:21:01.937 [2024-07-24 19:18:03.190116] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:01.937 Running I/O for 1 seconds... 
00:21:01.937 00:21:01.937 Latency(us) 00:21:01.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.937 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:01.937 Verification LBA range: start 0x0 length 0x4000 00:21:01.937 NVMe0n1 : 1.01 7276.20 28.42 0.00 0.00 17503.88 3470.98 18058.81 00:21:01.937 =================================================================================================================== 00:21:01.937 Total : 7276.20 28.42 0.00 0.00 17503.88 3470.98 18058.81 00:21:01.937 19:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:01.937 19:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:02.195 19:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:02.453 19:18:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:02.453 19:18:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:02.712 19:18:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:02.970 19:18:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:06.285 19:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:06.285 19:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:06.285 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2612261 00:21:06.285 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2612261 ']' 00:21:06.285 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2612261 00:21:06.285 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:06.285 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:06.285 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2612261 00:21:06.285 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:06.285 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:06.285 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2612261' 00:21:06.285 killing process with pid 2612261 00:21:06.285 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2612261 00:21:06.285 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2612261 00:21:06.576 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:06.576 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:06.834 rmmod nvme_tcp 00:21:06.834 rmmod nvme_fabrics 00:21:06.834 rmmod nvme_keyring 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2610498 ']' 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2610498 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2610498 ']' 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2610498 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2610498 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2610498' 00:21:06.834 killing process with pid 2610498 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2610498 00:21:06.834 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2610498 00:21:07.093 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:07.093 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:07.093 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:07.093 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:07.093 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:07.093 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.093 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.093 19:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:09.637 00:21:09.637 real 0m35.015s 00:21:09.637 user 2m4.292s 00:21:09.637 sys 0m5.907s 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:09.637 ************************************ 00:21:09.637 END TEST nvmf_failover 00:21:09.637 ************************************ 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.637 ************************************ 00:21:09.637 START TEST nvmf_host_discovery 00:21:09.637 ************************************ 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:09.637 * Looking for test storage... 00:21:09.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated from earlier prepends, then the standard system bin directories, elided ...]:/var/lib/snapd/snap/bin
00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... previous PATH value, elided ...]:/var/lib/snapd/snap/bin
00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... previous PATH value, elided ...]:/var/lib/snapd/snap/bin
00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:21:09.637 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... exported PATH value echoed back, elided ...]:/var/lib/snapd/snap/bin
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:21:09.638 19:18:15
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:09.638 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:11.059 19:18:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:11.059 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:11.059 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:11.059 Found net devices under 0000:08:00.0: cvl_0_0 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:11.059 Found net devices under 0000:08:00.1: cvl_0_1 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.059 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:11.060 19:18:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:11.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:21:11.060 00:21:11.060 --- 10.0.0.2 ping statistics --- 00:21:11.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.060 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:21:11.060 00:21:11.060 --- 10.0.0.1 ping statistics --- 00:21:11.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.060 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2615457 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2615457 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2615457 ']' 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
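For reference, the nvmf_tcp_init sequence traced above (nvmf/common.sh@229-@268) condenses to the plain commands below. This is a sketch reconstructed from the xtrace of this run, not the verbatim library code; the cvl_0_0/cvl_0_1 names are the two ice (e810) ports enumerated earlier.

# Move the target-side port into its own namespace so one machine can act as
# both target (10.0.0.2, netns cvl_0_0_ns_spdk) and initiator (10.0.0.1).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator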
00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.060 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.060 [2024-07-24 19:18:16.925751] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:21:11.060 [2024-07-24 19:18:16.925851] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.060 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.060 [2024-07-24 19:18:16.995447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.319 [2024-07-24 19:18:17.113314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.319 [2024-07-24 19:18:17.113378] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.319 [2024-07-24 19:18:17.113394] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.319 [2024-07-24 19:18:17.113407] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.319 [2024-07-24 19:18:17.113419] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.319 [2024-07-24 19:18:17.113450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.319 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.319 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:21:11.319 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.319 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:11.319 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.319 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.319 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:11.319 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.319 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.319 [2024-07-24 19:18:17.250323] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:21:11.320 [2024-07-24 19:18:17.258504] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.320 null0 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.320 null1 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2615488 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2615488 /tmp/host.sock 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2615488 ']' 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:11.320 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.320 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.579 [2024-07-24 19:18:17.337520] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
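Condensed from the trace above: before the separate host process (hostpid 2615488, RPC socket /tmp/host.sock) starts discovery, the target side is prepared with four RPCs. Values are taken verbatim from this run; the flag semantics follow nvmf/common.sh's TCP defaults.

rpc_cmd nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options as set by nvmf/common.sh
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009                      # discovery service on port 8009
rpc_cmd bdev_null_create null0 1000 512             # 1000 MiB null bdev, 512 B blocks
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine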
00:21:11.579 [2024-07-24 19:18:17.337609] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2615488 ] 00:21:11.579 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.579 [2024-07-24 19:18:17.398908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.579 [2024-07-24 19:18:17.515699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:11.837 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:11.838 
19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:11.838 19:18:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:11.838 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.096 [2024-07-24 19:18:17.908206] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.096 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:12.097 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:21:12.097 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:21:12.665 [2024-07-24 19:18:18.637958] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:12.665 [2024-07-24 19:18:18.637989] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:12.665 [2024-07-24 19:18:18.638015] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:12.924 [2024-07-24 19:18:18.724299] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:13.182 [2024-07-24 19:18:18.950396] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:13.182 [2024-07-24 19:18:18.950425] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:13.182 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:13.182 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:13.182 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:13.182 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:13.182 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:13.182 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.182 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.182 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:13.182 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 
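The repeated bdev_nvme_get_controllers/bdev_get_bdevs calls above come from two small polling helpers in host/discovery.sh; reconstructed from the @55/@59 xtrace, they amount to the sketch below, with the RPC socket hard-coded to this run's /tmp/host.sock.

get_subsystem_names() {   # host/discovery.sh@59, as traced above
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {         # host/discovery.sh@55, as traced above
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}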
00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:13.183 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.440 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:21:13.440 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:13.440 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:13.440 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:13.440 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:13.440 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:13.440 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:13.440 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:13.440 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:13.440 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:13.440 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.441 [2024-07-24 19:18:19.364500] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:13.441 [2024-07-24 19:18:19.365608] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:13.441 [2024-07-24 19:18:19.365648] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # jq -r '.[].name' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:13.441 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:13.699 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:13.699 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:13.699 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:13.699 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:13.699 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:13.699 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:13.699 19:18:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:13.699 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:13.699 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.699 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:13.699 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.699 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:13.699 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.699 [2024-07-24 19:18:19.493577] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:13.699 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:13.699 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:21:13.959 [2024-07-24 19:18:19.756953] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:13.959 [2024-07-24 19:18:19.756978] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:13.959 [2024-07-24 19:18:19.756990] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:14.527 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:14.527 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:14.527 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:14.527 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:14.527 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:14.527 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.527 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:14.527 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.527 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:14.527 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.785 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:14.785 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:14.785 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:14.785 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:14.785 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == 
expected_count))' 00:21:14.785 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:14.785 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:14.785 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:14.785 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:14.785 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.786 [2024-07-24 19:18:20.596580] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:14.786 [2024-07-24 19:18:20.596637] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:14.786 [2024-07-24 19:18:20.602332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.786 [2024-07-24 19:18:20.602369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:14.786 [2024-07-24 19:18:20.602388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.786 [2024-07-24 19:18:20.602405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.786 [2024-07-24 19:18:20.602422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.786 [2024-07-24 19:18:20.602438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.786 [2024-07-24 19:18:20.602454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.786 [2024-07-24 19:18:20.602469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.786 [2024-07-24 19:18:20.602493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c3ee0 is same with the state(5) to be set 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:14.786 [2024-07-24 19:18:20.612338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c3ee0 (9): Bad file descriptor 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.786 [2024-07-24 19:18:20.622388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:14.786 [2024-07-24 19:18:20.622640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.786 [2024-07-24 19:18:20.622673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c3ee0 with addr=10.0.0.2, port=4420 00:21:14.786 [2024-07-24 19:18:20.622693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c3ee0 is same with the state(5) to be set 00:21:14.786 [2024-07-24 19:18:20.622719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c3ee0 (9): Bad file descriptor 00:21:14.786 [2024-07-24 19:18:20.622756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:14.786 [2024-07-24 19:18:20.622775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:14.786 [2024-07-24 19:18:20.622794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
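The connect() failures with errno 111 (ECONNREFUSED) in the surrounding entries are the expected fallout of the nvmf_subsystem_remove_listener call on port 4420: the host keeps trying to reconnect its 4420 path until discovery prunes it, leaving only 4421. A plausible follow-up check in the same style as the rest of the script, using the get_subsystem_paths helper reconstructed from the @63 xtrace (the exact next assertion is a sketch, not a quote from host/discovery.sh):

get_subsystem_paths() {   # reconstructed from the host/discovery.sh@63 xtrace
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'  # expect only 4421 to survive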
00:21:14.786 [2024-07-24 19:18:20.622817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:14.786 [2024-07-24 19:18:20.632476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:14.786 [2024-07-24 19:18:20.632700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.786 [2024-07-24 19:18:20.632741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c3ee0 with addr=10.0.0.2, port=4420 00:21:14.786 [2024-07-24 19:18:20.632771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c3ee0 is same with the state(5) to be set 00:21:14.786 [2024-07-24 19:18:20.632800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c3ee0 (9): Bad file descriptor 00:21:14.786 [2024-07-24 19:18:20.632823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:14.786 [2024-07-24 19:18:20.632838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:14.786 [2024-07-24 19:18:20.632853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:14.786 [2024-07-24 19:18:20.632876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:14.786 [2024-07-24 19:18:20.642562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:14.786 [2024-07-24 19:18:20.642743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.786 [2024-07-24 19:18:20.642773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c3ee0 with addr=10.0.0.2, port=4420 00:21:14.786 [2024-07-24 19:18:20.642791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c3ee0 is same with the state(5) to be set 00:21:14.786 [2024-07-24 19:18:20.642816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c3ee0 (9): Bad file descriptor 00:21:14.786 [2024-07-24 19:18:20.642851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:14.786 [2024-07-24 19:18:20.642869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:14.786 [2024-07-24 19:18:20.642885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:14.786 [2024-07-24 19:18:20.642907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
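The reconnect loop above ("connect() failed, errno = 111" followed by "Resetting controller failed.") is the direct consequence of the discovery.sh@127 step traced earlier: the test removed the 4420 listener while the host still had a controller attached on that port. Roughly the triggering RPC (rpc_cmd is a wrapper around scripts/rpc.py; the flags below are copied from the trace):

    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # The target tears down the admin SQ — hence the "ABORTED - SQ DELETION" completions
    # for the outstanding ASYNC EVENT REQUESTs above — and the discovery AER prompts the
    # host to re-read the discovery log page.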
00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:14.786 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:14.786 [2024-07-24 19:18:20.652646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:14.786 [2024-07-24 19:18:20.652824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.786 [2024-07-24 19:18:20.652854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c3ee0 with addr=10.0.0.2, port=4420 00:21:14.786 [2024-07-24 19:18:20.652872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c3ee0 is same with the state(5) to be set 00:21:14.786 [2024-07-24 19:18:20.652903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c3ee0 (9): Bad file descriptor 00:21:14.786 [2024-07-24 19:18:20.652939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:14.787 [2024-07-24 19:18:20.652958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:14.787 [2024-07-24 19:18:20.652974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:14.787 [2024-07-24 19:18:20.652996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
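The get_subsystem_names and get_bdev_list conditions that waitforcondition evaluates are thin jq wrappers over the host-side RPC socket, reconstructed here from the discovery.sh@59 and @55 frames above. The trailing xargs collapses the sorted names onto one space-separated line, which is why the comparisons are written as single strings like "nvme0n1 nvme0n2":

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }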
00:21:14.787 [2024-07-24 19:18:20.662728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:14.787 [2024-07-24 19:18:20.662894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.787 [2024-07-24 19:18:20.662922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c3ee0 with addr=10.0.0.2, port=4420 00:21:14.787 [2024-07-24 19:18:20.662939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c3ee0 is same with the state(5) to be set 00:21:14.787 [2024-07-24 19:18:20.662964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c3ee0 (9): Bad file descriptor 00:21:14.787 [2024-07-24 19:18:20.663010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:14.787 [2024-07-24 19:18:20.663029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:14.787 [2024-07-24 19:18:20.663045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:14.787 [2024-07-24 19:18:20.663066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:14.787 [2024-07-24 19:18:20.672806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:14.787 [2024-07-24 19:18:20.672939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.787 [2024-07-24 19:18:20.672968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c3ee0 with addr=10.0.0.2, port=4420 00:21:14.787 [2024-07-24 19:18:20.672986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c3ee0 is same with the state(5) to be set 00:21:14.787 [2024-07-24 19:18:20.673010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c3ee0 (9): Bad file descriptor 00:21:14.787 [2024-07-24 19:18:20.673033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:14.787 [2024-07-24 19:18:20.673047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:14.787 [2024-07-24 19:18:20.673062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:14.787 [2024-07-24 19:18:20.673096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
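errno 111 in the posix_sock_create frames above is ECONNREFUSED on Linux: the 4420 listener is gone, so every reconnect attempt is refused until the host fails over to the 4421 path. A quick way to confirm the errno name (assumption: perl is available on the CI image):

    perl -e '$! = 111; print "$!\n"'    # prints: Connection refused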
00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.787 [2024-07-24 19:18:20.682883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:14.787 [2024-07-24 19:18:20.683037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.787 [2024-07-24 19:18:20.683066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c3ee0 with addr=10.0.0.2, port=4420 00:21:14.787 [2024-07-24 19:18:20.683084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c3ee0 is same with the state(5) to be set 00:21:14.787 [2024-07-24 19:18:20.683108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c3ee0 (9): Bad file descriptor 00:21:14.787 [2024-07-24 19:18:20.683175] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:14.787 [2024-07-24 19:18:20.683210] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:14.787 [2024-07-24 19:18:20.683248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:14.787 [2024-07-24 19:18:20.683269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:14.787 [2024-07-24 19:18:20.683285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:14.787 [2024-07-24 19:18:20.683309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
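With 4420 reported "not found" and 4421 "found again" in the discovery log records above, the next wait (discovery.sh@131 below) asserts that the only remaining path for nvme0 is $NVMF_SECOND_PORT. The helper it evaluates, reconstructed from the discovery.sh@63 frames:

    get_subsystem_paths() {
        local name=$1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    # Expected here: get_subsystem_paths nvme0  ->  4421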
00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:14.787 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:15.045 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.046 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:15.981 [2024-07-24 19:18:21.976633] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:15.981 [2024-07-24 19:18:21.976662] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:15.981 [2024-07-24 19:18:21.976688] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:16.239 [2024-07-24 19:18:22.062966] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:16.239 [2024-07-24 19:18:22.172086] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:16.239 [2024-07-24 19:18:22.172128] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:16.239 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.239 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:16.239 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:16.239 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:16.239 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:16.239 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.239 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:16.239 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.239 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:21:16.239 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.239 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.239 request: 00:21:16.239 { 00:21:16.239 "name": "nvme", 00:21:16.239 "trtype": "tcp", 00:21:16.239 "traddr": "10.0.0.2", 00:21:16.239 "adrfam": "ipv4", 00:21:16.239 "trsvcid": "8009", 00:21:16.239 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:16.239 "wait_for_attach": true, 00:21:16.239 "method": "bdev_nvme_start_discovery", 00:21:16.239 "req_id": 1 00:21:16.239 } 00:21:16.239 Got JSON-RPC error response 00:21:16.239 response: 00:21:16.239 { 00:21:16.239 "code": -17, 00:21:16.239 "message": "File exists" 00:21:16.239 } 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:16.240 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:16.499 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.499 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:16.499 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:21:16.499 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:16.499 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:16.499 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:16.499 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.499 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:16.499 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.499 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:16.499 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.499 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.499 request: 00:21:16.499 { 00:21:16.499 "name": "nvme_second", 00:21:16.499 "trtype": "tcp", 00:21:16.499 "traddr": "10.0.0.2", 00:21:16.499 "adrfam": "ipv4", 00:21:16.499 "trsvcid": "8009", 00:21:16.499 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:16.499 "wait_for_attach": true, 00:21:16.499 "method": "bdev_nvme_start_discovery", 00:21:16.499 "req_id": 1 00:21:16.499 } 00:21:16.499 Got JSON-RPC error response 00:21:16.500 response: 00:21:16.500 { 00:21:16.500 "code": -17, 00:21:16.500 "message": "File exists" 00:21:16.500 } 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:16.500 19:18:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.500 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:17.437 [2024-07-24 19:18:23.391557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:17.437 [2024-07-24 19:18:23.391627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f6660 with addr=10.0.0.2, port=8010 00:21:17.437 [2024-07-24 19:18:23.391658] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:17.437 [2024-07-24 19:18:23.391676] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:17.437 [2024-07-24 19:18:23.391691] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:18.819 [2024-07-24 19:18:24.394010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.819 [2024-07-24 19:18:24.394091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f6660 with addr=10.0.0.2, port=8010 00:21:18.819 [2024-07-24 19:18:24.394121] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:18.819 [2024-07-24 19:18:24.394139] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:21:18.819 [2024-07-24 19:18:24.394154] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:19.389 [2024-07-24 19:18:25.396184] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:19.389 request: 00:21:19.389 { 00:21:19.389 "name": "nvme_second", 00:21:19.389 "trtype": "tcp", 00:21:19.389 "traddr": "10.0.0.2", 00:21:19.389 "adrfam": "ipv4", 00:21:19.389 "trsvcid": "8010", 00:21:19.389 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:19.389 "wait_for_attach": false, 00:21:19.389 "attach_timeout_ms": 3000, 00:21:19.389 "method": "bdev_nvme_start_discovery", 00:21:19.389 "req_id": 1 00:21:19.389 } 00:21:19.389 Got JSON-RPC error response 00:21:19.389 response: 00:21:19.389 { 00:21:19.389 "code": -110, 00:21:19.389 "message": "Connection timed out" 00:21:19.389 } 00:21:19.389 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:19.390 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:19.390 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:19.390 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:19.390 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:19.390 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:19.390 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:19.390 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:19.390 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.390 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:19.390 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2615488 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.649 rmmod nvme_tcp 00:21:19.649 rmmod nvme_fabrics 00:21:19.649 rmmod nvme_keyring 00:21:19.649 19:18:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2615457 ']' 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2615457 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2615457 ']' 00:21:19.649 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2615457 00:21:19.650 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:21:19.650 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:19.650 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2615457 00:21:19.650 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:19.650 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:19.650 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2615457' 00:21:19.650 killing process with pid 2615457 00:21:19.650 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2615457 00:21:19.650 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2615457 00:21:19.910 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.910 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.910 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:19.910 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.910 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.910 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.910 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.910 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.819 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.819 00:21:21.819 real 0m12.713s 00:21:21.819 user 0m18.991s 00:21:21.819 sys 0m2.442s 00:21:21.819 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:21.819 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.819 ************************************ 00:21:21.819 END TEST nvmf_host_discovery 00:21:21.819 ************************************ 00:21:21.819 19:18:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:21.819 19:18:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
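The @950-@974 frames above are autotest_common.sh's killprocess tearing down the target app (pid 2615457). A sketch of the flow the trace shows; the sudo special case is visible only as the '[ reactor_1 = sudo ]' test, so what the real helper does in that branch is not reconstructed here:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                     # @950: require a pid argument
        if kill -0 "$pid" 2>/dev/null; then           # @954: is the process still alive?
            if [ "$(uname)" = Linux ]; then           # @955
                process_name=$(ps --no-headers -o comm= "$pid")   # @956: -> reactor_1 here
            fi
            # @960: reactor_1 != sudo, so no sudo handling was needed on this run
            echo "killing process with pid $pid"      # @968
            kill "$pid"                               # @969
            wait "$pid"                               # @974: reap it
        fi
    }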
00:21:21.819 19:18:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:21.819 19:18:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.078 ************************************ 00:21:22.078 START TEST nvmf_host_multipath_status 00:21:22.078 ************************************ 00:21:22.078 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:22.078 * Looking for test storage... 00:21:22.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:22.078 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.078 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:22.078 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.078 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.078 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.078 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.078 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.078 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.078 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.078 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.078 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
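The real/user/sys block and the START/END banners framing each test come from the run_test wrapper in autotest_common.sh. A minimal sketch consistent with the output above (banner width and the timing format from bash's built-in time are guesses; only the START TEST/END TEST wording is taken from the log):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # produces the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }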
00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
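The three very long PATH assignments above are paths/export.sh prepending the golangci, Go, and protoc toolchains; since the file is apparently re-sourced for every test script, the same prefixes pile up, which is why the final PATH repeats them so many times. Reconstructed from the @2-@6 frames above, the script reduces to roughly:

    PATH=/opt/golangci/1.54.2/bin:$PATH    # @2
    PATH=/opt/go/1.21.1/bin:$PATH          # @3
    PATH=/opt/protoc/21.7/bin:$PATH        # @4
    export PATH                            # @5
    echo $PATH                             # @6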
00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:22.079 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:23.986 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.986 
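gather_supported_nvmf_pci_devs above matched 0000:08:00.0 as vendor 0x8086, device 0x159b, i.e. an Intel E810-family port handled by the 'ice' driver named in the trace. A manual equivalent of that lookup (assumption: pciutils is installed on the node):

    lspci -d 8086:159b       # list every 0x159b E810 port, e.g. 08:00.0 and 08:00.1
    lspci -nn -s 08:00.0     # show the [8086:159b] vendor:device pair for one port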
19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:23.986 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.986 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:23.987 Found net devices under 0000:08:00.0: cvl_0_0 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:23.987 19:18:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:23.987 Found net devices under 0000:08:00.1: cvl_0_1 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:23.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:21:23.987 00:21:23.987 --- 10.0.0.2 ping statistics --- 00:21:23.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.987 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:21:23.987 00:21:23.987 --- 10.0.0.1 ping statistics --- 00:21:23.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.987 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2617858 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2617858 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2617858 ']' 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:23.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:23.987 [2024-07-24 19:18:29.696257] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:21:23.987 [2024-07-24 19:18:29.696354] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.987 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.987 [2024-07-24 19:18:29.762718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:23.987 [2024-07-24 19:18:29.883654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.987 [2024-07-24 19:18:29.883718] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.987 [2024-07-24 19:18:29.883734] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.987 [2024-07-24 19:18:29.883747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.987 [2024-07-24 19:18:29.883759] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.987 [2024-07-24 19:18:29.883858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.987 [2024-07-24 19:18:29.883895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:23.987 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:24.246 19:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.246 19:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2617858 00:21:24.246 19:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:24.504 [2024-07-24 19:18:30.286112] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.504 19:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:24.762 Malloc0 00:21:24.762 19:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:25.020 19:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:25.278 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:25.536 [2024-07-24 19:18:31.380303] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.536 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:25.794 [2024-07-24 19:18:31.612984] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:25.794 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2618079 00:21:25.794 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:25.794 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:25.794 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2618079 /var/tmp/bdevperf.sock 00:21:25.794 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2618079 ']' 00:21:25.794 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.794 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:25.794 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
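For reference, the target-side bring-up traced above condenses to the RPC sequence below. This is a sketch reconstructed from the traced commands, not the verbatim test script: $rpc abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and the flag comments are assumptions about what the options do rather than text from the log.

rpc=scripts/rpc.py    # assumption: run from an SPDK checkout

# TCP transport with the options traced above
$rpc nvmf_create_transport -t tcp -o -u 8192

# 64 MB malloc bdev with 512-byte blocks to back the namespace
$rpc bdev_malloc_create 64 512 -b Malloc0

# Subsystem with ANA reporting enabled (-r); -a allows any host to connect
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# Two listeners on the same address, ports 4420 and 4421 -> two I/O paths
# for the multipath initiator to arbitrate between
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The initiator side then attaches the same subsystem twice through bdevperf (once per port, with -x multipath on the second attach), which is what produces the two paths queried throughout the rest of the trace.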
00:21:25.794 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:25.794 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:26.052 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:26.053 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:21:26.053 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:26.311 19:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:26.881 Nvme0n1 00:21:26.881 19:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:27.140 Nvme0n1 00:21:27.140 19:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:27.140 19:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:29.676 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:29.676 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:29.676 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:29.936 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:30.876 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:30.876 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:30.876 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.876 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:31.135 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.135 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:31.135 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.135 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:31.393 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:31.393 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:31.393 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.393 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:31.651 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.651 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:31.652 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.652 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:31.910 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.910 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:31.910 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.910 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:32.168 19:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.168 19:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:32.168 19:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.168 19:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:32.426 19:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:32.426 19:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:32.426 19:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:32.685 19:18:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:32.944 19:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:33.881 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:33.881 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:33.881 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.881 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:34.451 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:34.451 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:34.451 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.451 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:34.710 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:34.710 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:34.710 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.710 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:34.968 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:34.968 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:34.968 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.968 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:35.226 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:35.226 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:35.226 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.226 19:18:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:35.483 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:35.483 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:35.483 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.483 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:35.742 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:35.742 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:35.742 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:36.000 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:36.567 19:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:37.506 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:37.506 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:37.506 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.506 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:37.765 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:37.765 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:37.765 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.765 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:38.022 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:38.022 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:38.022 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.022 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:38.279 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:38.279 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:38.279 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.280 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:38.537 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:38.537 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:38.537 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.538 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:38.795 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:38.795 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:38.795 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.795 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:39.364 19:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.364 19:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:39.364 19:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:39.623 19:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:39.882 19:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:40.820 19:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:40.820 19:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:40.820 19:18:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.820 19:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:41.078 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:41.078 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:41.078 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:41.078 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:41.337 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:41.337 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:41.337 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:41.337 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:41.595 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:41.595 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:41.595 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:41.595 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:41.852 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:41.852 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:41.852 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:41.852 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:42.110 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.110 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:42.110 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.110 19:18:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:42.367 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:42.367 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:42.367 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:42.625 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:42.884 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:43.822 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:43.822 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:43.822 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.822 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:44.080 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:44.080 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:44.080 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:44.080 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:44.646 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:44.646 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:44.646 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:44.646 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:44.904 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:44.904 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:44.904 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:44.904 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:45.162 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:45.162 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:45.162 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.162 19:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:45.419 19:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:45.419 19:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:45.419 19:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.419 19:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:45.677 19:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:45.677 19:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:45.677 19:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:45.934 19:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:46.193 19:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:47.571 19:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:47.571 19:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:47.571 19:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:47.571 19:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:47.571 19:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:47.571 19:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:47.571 19:18:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:47.571 19:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:47.831 19:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:47.831 19:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:47.831 19:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:47.831 19:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:48.121 19:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.121 19:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:48.121 19:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.121 19:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:48.412 19:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.412 19:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:48.412 19:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.412 19:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:48.986 19:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:48.986 19:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:48.986 19:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.986 19:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:49.244 19:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.244 19:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:49.502 19:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:21:49.502 19:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:49.760 19:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:50.019 19:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:50.956 19:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:50.956 19:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:50.956 19:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:50.956 19:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:51.215 19:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.215 19:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:51.215 19:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.215 19:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:51.473 19:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.473 19:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:51.473 19:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.473 19:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:51.731 19:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:51.731 19:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:51.731 19:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.731 19:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:52.298 19:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.298 19:18:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:52.298 19:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.298 19:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:52.556 19:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.556 19:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:52.556 19:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.556 19:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:52.815 19:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.815 19:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:52.815 19:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:53.073 19:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:53.333 19:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:54.269 19:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:54.269 19:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:54.269 19:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.269 19:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:54.528 19:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:54.528 19:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:54.528 19:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.528 19:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:55.094 19:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.094 19:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:55.094 19:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.094 19:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:55.352 19:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.352 19:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:55.352 19:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.352 19:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:55.611 19:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.611 19:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:55.611 19:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.611 19:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:55.869 19:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.869 19:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:55.869 19:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.869 19:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:56.127 19:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.127 19:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:56.127 19:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:56.385 19:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:56.642 19:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
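Each check_status round in the trace is six port_status probes (current, connected, accessible for each of the two ports). The helper's shape can be reconstructed from the rpc.py/jq pipeline repeated above; a sketch, assuming $rpc points at scripts/rpc.py and that bdevperf's RPC socket is /var/tmp/bdevperf.sock as in the log:

# Probe one attribute (current/connected/accessible) of the I/O path on a
# given port, as reported by the initiator-side bdevperf process.
port_status() {
    local port=$1 attr=$2 expected=$3
    local actual
    actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

# e.g. the check traced after set_ANA_state non_optimized inaccessible:
port_status 4420 current true && port_status 4421 accessible false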
00:21:58.016 19:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:58.016 19:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:58.016 19:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.016 19:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:58.016 19:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.016 19:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:58.016 19:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.017 19:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:58.274 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.274 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:58.274 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.274 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:58.532 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.532 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:58.532 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.532 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:58.790 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.790 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:58.790 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.790 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:59.048 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.048 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:59.048 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.048 19:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:59.306 19:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.306 19:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:59.306 19:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:59.564 19:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:59.823 19:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:00.758 19:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:00.758 19:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:00.758 19:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.758 19:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:01.016 19:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.016 19:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:01.016 19:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.016 19:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:01.274 19:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:01.274 19:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:01.274 19:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.274 19:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:01.843 19:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:22:01.843 19:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:01.843 19:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.843 19:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:01.843 19:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.843 19:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:01.843 19:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.843 19:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:02.409 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.409 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:02.409 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.409 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:02.409 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:02.409 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2618079 00:22:02.409 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2618079 ']' 00:22:02.409 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2618079 00:22:02.409 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:22:02.409 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.409 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2618079 00:22:02.691 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:02.691 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:02.691 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2618079' 00:22:02.691 killing process with pid 2618079 00:22:02.691 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2618079 00:22:02.691 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2618079 00:22:02.691 Connection closed with partial response: 00:22:02.691 00:22:02.691 00:22:02.691 
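The killprocess helper traced here (autotest_common.sh@950-@974) amounts to roughly the following. This is a simplified sketch based on the expanded trace, and the sudo branch is abbreviated (the real helper kills sudo's child process rather than sudo itself):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                # @950: a pid must be supplied
        kill -0 "$pid" || return 0               # @954: nothing to do if it already exited
        local process_name
        if [[ $(uname) == Linux ]]; then         # @955
            process_name=$(ps --no-headers -o comm= "$pid")   # @956
        fi
        if [[ $process_name == sudo ]]; then     # @960
            :   # abbreviated: kill the child sudo spawned, not sudo itself
        else
            echo "killing process with pid $pid" # @968
            kill "$pid"                          # @969
        fi
        wait "$pid"                              # @974: reap the process
    }

In this run ps reports the name reactor_2 (the SPDK reactor thread on core 2, matching bdevperf's -c 0x4 core mask), so the plain kill path is taken, and the "Connection closed with partial response" lines are bdevperf shutting down mid-I/O. The test then waits on the pid once more (multipath_status.sh@139) and cats bdevperf's log, try.txt. When skimming a dump like the one below, tallying completion statuses is usually more informative than reading entries one by one, e.g.:

    grep -o 'ASYMMETRIC ACCESS [A-Z]*' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt | sort | uniq -c

The completions in the excerpt below all report ASYMMETRIC ACCESS INACCESSIBLE (03/02), the expected NVMe status for I/O submitted while a path's ANA state is inaccessible, which is what prompts the host's multipath layer to retry on the other listener.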
19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2618079 00:22:02.691 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:02.691 [2024-07-24 19:18:31.677067] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:22:02.691 [2024-07-24 19:18:31.677160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2618079 ] 00:22:02.691 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.691 [2024-07-24 19:18:31.732014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.691 [2024-07-24 19:18:31.849237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.691 Running I/O for 90 seconds... 00:22:02.691 [2024-07-24 19:18:48.514089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.691 [2024-07-24 19:18:48.514160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:02.691 [2024-07-24 19:18:48.514199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.691 [2024-07-24 19:18:48.514219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:02.691 [2024-07-24 19:18:48.514245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.691 [2024-07-24 19:18:48.514262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:02.691 [2024-07-24 19:18:48.514287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.691 [2024-07-24 19:18:48.514304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:02.691 [2024-07-24 19:18:48.514329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.691 [2024-07-24 19:18:48.514346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:02.691 [2024-07-24 19:18:48.514370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.691 [2024-07-24 19:18:48.514387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:02.691 [2024-07-24 19:18:48.514411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.691 [2024-07-24 19:18:48.514428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
[... the remainder of this stretch of try.txt repeats the same two-line pattern: an nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* for each WRITE/READ (sqid:1, nsid:1, lba 12800-13816, len:8) immediately followed by an nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) for the same cid; excerpt resumes below ...]
00:22:02.695 [2024-07-24 19:18:48.522298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:22:02.695 [2024-07-24 19:18:48.522322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.695 [2024-07-24 19:18:48.522339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:22:02.695 [2024-07-24 19:18:48.522363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.695 [2024-07-24 19:18:48.522380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:22:02.695 [2024-07-24 19:18:48.522404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.695 [2024-07-24 19:18:48.522428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:22:02.695 [2024-07-24 19:18:48.522454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.695 [2024-07-24 19:18:48.522471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:22:02.695 [2024-07-24 19:18:48.522503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.695 [2024-07-24 19:18:48.522521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:22:02.695 [2024-07-24 19:18:48.522546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.695 [2024-07-24 19:18:48.522563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:22:02.695 [2024-07-24 19:18:48.522587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.695 [2024-07-24 19:18:48.522604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:22:02.695 [2024-07-24 19:18:48.522628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.695 [2024-07-24 19:18:48.522645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:22:02.695 [2024-07-24 19:18:48.522669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.522686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.522710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.522727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.522752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.522768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.522792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.522809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.522834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.522851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.523862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.523888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.523918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.523941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.523967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.523985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.524026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.524068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.524109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.524151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.524193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.524234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.696 [2024-07-24 19:18:48.524276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.524961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.524989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.696 [2024-07-24 19:18:48.525007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:22:02.696 [2024-07-24 19:18:48.525031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.525048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.525090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.525138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.525179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.525220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.525261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.525302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.525343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.525385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.525426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.525467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.525520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.525562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.525603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.525645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.525686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.525728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.525769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.525810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.525852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.525893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.525940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.525965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.525982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.526006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.526028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.526052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.526069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.526094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.526111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.526136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.526153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.526873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.526900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.526930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.697 [2024-07-24 19:18:48.526949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.526975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.526992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.527016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.527033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.527057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.527074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.527099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.527116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.527140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.527157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.527181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.527198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.527222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.527240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:22:02.697 [2024-07-24 19:18:48.527269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.697 [2024-07-24 19:18:48.527287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.527964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.527988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:22:02.698 [2024-07-24 19:18:48.528830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.698 [2024-07-24 19:18:48.528848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.528872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.699 [2024-07-24 19:18:48.528889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.528913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.699 [2024-07-24 19:18:48.528930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.528955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.699 [2024-07-24 19:18:48.528972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.528996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.699 [2024-07-24 19:18:48.529013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.699 [2024-07-24 19:18:48.529054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.699 [2024-07-24 19:18:48.529094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.699 [2024-07-24 19:18:48.529135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.699 [2024-07-24 19:18:48.529176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.699 [2024-07-24 19:18:48.529217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.699 [2024-07-24 19:18:48.529258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.699 [2024-07-24 19:18:48.529299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.699 [2024-07-24 19:18:48.529345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.529387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.529428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.529470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.529522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.529563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.529605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.529646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.529687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.529728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.529770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.529811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.529857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.529898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.529923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.529940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.530912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.530936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.530965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.530984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.531008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.531026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.531050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.531067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.531092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.531109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.531133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.531151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.531175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.531192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.531217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.531234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.531258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.531275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.531299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.699 [2024-07-24 19:18:48.531321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:22:02.699 [2024-07-24 19:18:48.531346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.700 [2024-07-24 19:18:48.531364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.531981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.531998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.532022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.532039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.532063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.532080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.532104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.532121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.532145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.532162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.532185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.532202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.532226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.532243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.532268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.532284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.532308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.532325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.532353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.532371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.532395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.532412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:22:02.700 [2024-07-24 19:18:48.532437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.700 [2024-07-24 19:18:48.532454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:02.700 [2024-07-24 19:18:48.532478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.700 [2024-07-24 19:18:48.532502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:02.700 [2024-07-24 19:18:48.532527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.700 [2024-07-24 19:18:48.532545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:02.700 [2024-07-24 19:18:48.532569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.700 [2024-07-24 19:18:48.532586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:02.700 [2024-07-24 19:18:48.532610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.700 [2024-07-24 19:18:48.532627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.700 [2024-07-24 19:18:48.532651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.532668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.532693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.532710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.532734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.532751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.532775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.532792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.532816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.532832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.532864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 
lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.532881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.532906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.532923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.532947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.532964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.532988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.533005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.533029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.533046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.533070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.533087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.533112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.533129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.533154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.533171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.533925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.533954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.533984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.534003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.701 [2024-07-24 19:18:48.534045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:22:02.701 [2024-07-24 19:18:48.534451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.701 [2024-07-24 19:18:48.534947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:02.701 [2024-07-24 19:18:48.534972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.534989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:02.702 [2024-07-24 19:18:48.535721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.535971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.535995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.536012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.536037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.536053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.536078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.536095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.536119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.536135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.536159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.536176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.536201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.536222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.536247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.536264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.536289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.536306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.536331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.536347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.536371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.536388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.536412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.536430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.536454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.702 [2024-07-24 19:18:48.536471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:02.702 [2024-07-24 19:18:48.536503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.702 [2024-07-24 19:18:48.536521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.536546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.536563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.536587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.536604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.536629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.536646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.536671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.536689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.536713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.536734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.536759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.536776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.536801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.536818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.536842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.536859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.536883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.536900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.536924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.536942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 
dnr:0 00:22:02.703 [2024-07-24 19:18:48.536967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.536986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.537012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.537030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.537959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.537983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.538034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.538077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.538118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.538160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.538207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.538248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.538290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.538331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.538372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.538414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.703 [2024-07-24 19:18:48.538456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.703 [2024-07-24 19:18:48.538508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.703 [2024-07-24 19:18:48.538550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.703 [2024-07-24 19:18:48.538592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.703 [2024-07-24 19:18:48.538633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.703 [2024-07-24 19:18:48.538674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.703 [2024-07-24 19:18:48.538722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.703 [2024-07-24 19:18:48.538764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.703 [2024-07-24 19:18:48.538806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.703 [2024-07-24 19:18:48.538848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.703 [2024-07-24 19:18:48.538889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.703 [2024-07-24 19:18:48.538930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.703 [2024-07-24 19:18:48.538971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:02.703 [2024-07-24 19:18:48.538996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:02.704 [2024-07-24 19:18:48.539137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.539783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.539825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.539866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.539907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.539949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.539973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.539990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.540014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.540033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.540057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.540074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.540098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.540115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.540139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.540156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.540181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.540198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.540222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.540243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.540938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.540966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.540996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.541015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.541040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.541057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 
dnr:0 00:22:02.704 [2024-07-24 19:18:48.541081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.704 [2024-07-24 19:18:48.541102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.541127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.541145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.541169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.541186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:02.704 [2024-07-24 19:18:48.541211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.704 [2024-07-24 19:18:48.541228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.541957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.541986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.542003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.542028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.542046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.542070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.542087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.542111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.542128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.542153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.542170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.542194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.542211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.542235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.542253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.542277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.542294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.542318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:02.705 [2024-07-24 19:18:48.542335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.542360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.542377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.542401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.542418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.542442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.542459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.542489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.542512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:02.705 [2024-07-24 19:18:48.542537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.705 [2024-07-24 19:18:48.542554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.542578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.542596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.542620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.542637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.542661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.542678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.542703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.542720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.542750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.542768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.542792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.542809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.542834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.542851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.542875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.542892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.542917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.542934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.542959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.542976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.543023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.543067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.543110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.543152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.543193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.543234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.543276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.543317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.543358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.543399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.543447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.543497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.706 [2024-07-24 19:18:48.543540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.706 [2024-07-24 19:18:48.543587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:22:02.706 [2024-07-24 19:18:48.543612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.706 [2024-07-24 19:18:48.543629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.706 [2024-07-24 19:18:48.543671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.706 [2024-07-24 19:18:48.543713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.706 [2024-07-24 19:18:48.543754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.706 [2024-07-24 19:18:48.543796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.706 [2024-07-24 19:18:48.543838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.706 [2024-07-24 19:18:48.543879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.706 [2024-07-24 19:18:48.543920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.706 [2024-07-24 19:18:48.543962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.543986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.706 [2024-07-24 19:18:48.544004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.544029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.706 [2024-07-24 19:18:48.544047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.545007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.706 [2024-07-24 19:18:48.545035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:02.706 [2024-07-24 19:18:48.545066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.706 [2024-07-24 19:18:48.545085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.707 [2024-07-24 19:18:48.545129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.707 [2024-07-24 19:18:48.545171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.707 [2024-07-24 19:18:48.545212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.707 [2024-07-24 19:18:48.545255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.707 [2024-07-24 19:18:48.545296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.707 [2024-07-24 19:18:48.545338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.707 [2024-07-24 19:18:48.545379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.707 [2024-07-24 19:18:48.545420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.707 [2024-07-24 19:18:48.545462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.707 [2024-07-24 19:18:48.545514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.707 [2024-07-24 19:18:48.545561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.545604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.545645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.545687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.545727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.545768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:02.707 [2024-07-24 19:18:48.545810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.545852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.545894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.545935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.545959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.545976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 
nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.707 [2024-07-24 19:18:48.546621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:02.707 [2024-07-24 19:18:48.546646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.546663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.546687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.546704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.546728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.546745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.546775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.546793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.546817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.546834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.546860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.546877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.546902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.546919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.546944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.546961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.546985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.547002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.547027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.547044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:22:02.708 [2024-07-24 19:18:48.547068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.547085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.547113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.547131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.547155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.547172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.547197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.547214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.547238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.547255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.547280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.547297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.547989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.548013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.548065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.548108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.548150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.708 [2024-07-24 19:18:48.548191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:02.708 [2024-07-24 19:18:48.548855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.708 [2024-07-24 19:18:48.548871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.709 [2024-07-24 19:18:48.548896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.709 [2024-07-24 19:18:48.548913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.709 [2024-07-24 19:18:48.548937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.709 [2024-07-24 19:18:48.548954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:02.709 [2024-07-24 19:18:48.548978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:02.709 [2024-07-24 19:18:48.548995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:22:02.709 [2024-07-24 19:18:48.549019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.709 [2024-07-24 19:18:48.549036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
[... identical WRITE command/completion pairs elided (19:18:48.549060 - 19:18:48.550611): lba:13728-13816 and lba:13120-13312, len:8 each, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 0066-000a ...]
00:22:02.710 [2024-07-24 19:18:48.550636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:02.710 [2024-07-24 19:18:48.550653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0
[... hundreds of further identical pairs elided (19:18:48.550677 - 19:18:48.566742): READs lba:12808-13112 and WRITEs lba:13320-13816, then a second pass over the same lba ranges, every command on qid:1 completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd wrapping from 007f to 0000 ...]
00:22:02.714 [2024-07-24 19:18:48.566774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.714 [2024-07-24 19:18:48.566792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:22:02.714 [2024-07-24 19:18:48.566823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.714 [2024-07-24 19:18:48.566840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:02.714 [2024-07-24 19:18:48.566871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.714 [2024-07-24 19:18:48.566888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:02.714 [2024-07-24 19:18:48.566919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.714 [2024-07-24 19:18:48.566936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:02.714 [2024-07-24 19:18:48.566967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.714 [2024-07-24 19:18:48.566985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:02.714 [2024-07-24 19:18:48.567015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:18:48.567032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:18:48.567081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:18:48.567128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:18:48.567176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:18:48.567224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:18:48.567271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:18:48.567324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:18:48.567372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:18:48.567420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:18:48.567468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:18:48.567524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:18:48.567572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:18:48.567619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:18:48.567668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:18:48.567715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:18:48.567763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:18:48.567810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:18:48.567858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:18:48.567911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.567941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:18:48.567959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:18:48.568133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:18:48.568155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:19:05.627540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:19:05.627621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:19:05.627661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:19:05.627680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:19:05.628038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:19:05.628063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:19:05.628090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:19:05.628108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:19:05.628133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:02.715 [2024-07-24 19:19:05.628150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:19:05.628174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:19:05.628192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:19:05.628217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:19:05.628234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:19:05.628259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.715 [2024-07-24 19:19:05.628276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:19:05.628301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:19:05.628318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:19:05.628343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:19:05.628373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:02.715 [2024-07-24 19:19:05.628398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.715 [2024-07-24 19:19:05.628416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.628441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.716 [2024-07-24 19:19:05.628458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.628490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.716 [2024-07-24 19:19:05.628509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.628534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.716 [2024-07-24 19:19:05.628552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.628576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 
nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.716 [2024-07-24 19:19:05.628593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.628617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.628634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.628659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.628677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.628701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.628718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.628742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.628760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.628784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.628801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.628825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.628842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.628867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.628888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.628913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.628930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.628955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.628971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.628996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.629013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.629037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.629054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.629078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.716 [2024-07-24 19:19:05.629096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.629120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.716 [2024-07-24 19:19:05.629137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.629161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.716 [2024-07-24 19:19:05.629178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.629203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.629220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.629244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.629261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.629286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.629303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.629328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.629345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.629370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.629387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
00:22:02.716 [2024-07-24 19:19:05.631507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.716 [2024-07-24 19:19:05.631536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.631571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.716 [2024-07-24 19:19:05.631600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.631641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.716 [2024-07-24 19:19:05.631662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.631688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.716 [2024-07-24 19:19:05.631705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.631729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.716 [2024-07-24 19:19:05.631746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.631775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.716 [2024-07-24 19:19:05.631794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.631819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.716 [2024-07-24 19:19:05.631836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.631865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.631893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.631922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.631940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.631965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.631982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.632007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.632024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.632054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.632071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:02.716 [2024-07-24 19:19:05.632101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.716 [2024-07-24 19:19:05.632119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.632144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.632161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.632185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.632202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.632227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.632244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.632268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.632285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.632310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.632330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.632355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.632373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.632397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.632414] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.632438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.632455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.632487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.632506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.632532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.632549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.633666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.633694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.633725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.633749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.633775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.633792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.633817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.633834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.633859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.633876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.633900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.633917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.633942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:02.717 [2024-07-24 19:19:05.633959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.633983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.634000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.634042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.634084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.634125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.634167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.634208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.634253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.634295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.634337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.634398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.634442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.634509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.634559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.634601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.634647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.634690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.634732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.717 [2024-07-24 19:19:05.634773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.717 [2024-07-24 19:19:05.634815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:02.717 [2024-07-24 19:19:05.634844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.718 [2024-07-24 19:19:05.634862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.634887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.718 [2024-07-24 19:19:05.634909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.634934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.718 [2024-07-24 19:19:05.634951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.634976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.718 [2024-07-24 19:19:05.634993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.635018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.718 [2024-07-24 19:19:05.635035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.635059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.718 [2024-07-24 19:19:05.635076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.635100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.718 [2024-07-24 19:19:05.635117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.635142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.718 [2024-07-24 19:19:05.635162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.635187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.718 [2024-07-24 19:19:05.635205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.635229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.718 [2024-07-24 19:19:05.635246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
00:22:02.718 [2024-07-24 19:19:05.635271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.718 [2024-07-24 19:19:05.635288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.636006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.718 [2024-07-24 19:19:05.636032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.636067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.718 [2024-07-24 19:19:05.636087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.636112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.718 [2024-07-24 19:19:05.636130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.636155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.718 [2024-07-24 19:19:05.636172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.636197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.718 [2024-07-24 19:19:05.636214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.636238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.718 [2024-07-24 19:19:05.636256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.636280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.718 [2024-07-24 19:19:05.636297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.636321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.718 [2024-07-24 19:19:05.636338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:02.718 [2024-07-24 19:19:05.636363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.718 [2024-07-24 19:19:05.636380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:22:02.718 [2024-07-24 19:19:05.636404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.718 [2024-07-24 19:19:05.636421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
[... ~200 further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided (19:19:05.636-19:19:05.659, qid:1, cid/lba/sqhd varying, lba range 15568-17536): every queued READ/WRITE on the queue pair completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:22:02.724 [2024-07-24 19:19:05.659542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:22:02.724 [2024-07-24 19:19:05.659566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1
lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.724 [2024-07-24 19:19:05.659584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.659608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.724 [2024-07-24 19:19:05.659625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.659650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.724 [2024-07-24 19:19:05.659667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.659692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.724 [2024-07-24 19:19:05.659709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.659733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.724 [2024-07-24 19:19:05.659750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.659775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.724 [2024-07-24 19:19:05.659792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.659817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.724 [2024-07-24 19:19:05.659839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.659864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.724 [2024-07-24 19:19:05.659882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.659906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.724 [2024-07-24 19:19:05.659923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.659947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.724 [2024-07-24 19:19:05.659964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.659988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.724 [2024-07-24 19:19:05.660006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.660030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.724 [2024-07-24 19:19:05.660047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.660072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.724 [2024-07-24 19:19:05.660089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.660114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.724 [2024-07-24 19:19:05.660131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.660156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.724 [2024-07-24 19:19:05.660173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.660198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.724 [2024-07-24 19:19:05.660215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.660240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.724 [2024-07-24 19:19:05.660257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.660281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.724 [2024-07-24 19:19:05.660298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.660323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.724 [2024-07-24 19:19:05.660344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:02.724 [2024-07-24 19:19:05.660369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.724 [2024-07-24 19:19:05.660386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 
00:22:02.725 [2024-07-24 19:19:05.660411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.660428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.660452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.660469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.660501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.660520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.660545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.660562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.660586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.660604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.660628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.725 [2024-07-24 19:19:05.660645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.660670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.725 [2024-07-24 19:19:05.660686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.660711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.660728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.660753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.660770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.662151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.662178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.662209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.662228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.662259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.662277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.662301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.662319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.662343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.725 [2024-07-24 19:19:05.662360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.662385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.662402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.663811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.663838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.663874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.663893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.663918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.663935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.663960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.663986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.725 [2024-07-24 19:19:05.664042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.725 [2024-07-24 19:19:05.664084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.725 [2024-07-24 19:19:05.664127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.725 [2024-07-24 19:19:05.664167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.725 [2024-07-24 19:19:05.664216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.725 [2024-07-24 19:19:05.664257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.664299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.725 [2024-07-24 19:19:05.664341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.725 [2024-07-24 19:19:05.664382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.664423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:02.725 [2024-07-24 19:19:05.664464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.664515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.664557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.725 [2024-07-24 19:19:05.664599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.725 [2024-07-24 19:19:05.664645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.725 [2024-07-24 19:19:05.664688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.725 [2024-07-24 19:19:05.664734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:02.725 [2024-07-24 19:19:05.664759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.664776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.664801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.664818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.664842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.664859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.664900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:17088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.664920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.664946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.664963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.664987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.665004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.665029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.665046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.665070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.665087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.665111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.665128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.665153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.665170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.665194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.665211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.665236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.665257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.665282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.665299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.665323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.665340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.665364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.665381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.665406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.665422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.665447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.665464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.665494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.665528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.665558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.665575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.666918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.666946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.666976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.666995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.667020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.667038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.669600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.669627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:02.726 [2024-07-24 19:19:05.669658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.669677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.669708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.669726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.669751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.669769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.669793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.669811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.669835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.669853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.669877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.669894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.669919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.669936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.669961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.669978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.670002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.670019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.670044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.670061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.670086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.670103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.670127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.726 [2024-07-24 19:19:05.670144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.670169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.670186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.670218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.726 [2024-07-24 19:19:05.670236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:02.726 [2024-07-24 19:19:05.670261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.670278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.670320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.670361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.670402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.727 [2024-07-24 19:19:05.670443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.727 [2024-07-24 19:19:05.670493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.670537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.670579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.727 [2024-07-24 19:19:05.670620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.670662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.670703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.727 [2024-07-24 19:19:05.670757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.727 [2024-07-24 19:19:05.670799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.727 [2024-07-24 19:19:05.670841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.670882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:02.727 [2024-07-24 19:19:05.670924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.670966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.670991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.671008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.727 [2024-07-24 19:19:05.671050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.727 [2024-07-24 19:19:05.671091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.727 [2024-07-24 19:19:05.671132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.727 [2024-07-24 19:19:05.671174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.671215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.671261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.671303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:17744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.671345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.671387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.727 [2024-07-24 19:19:05.671428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.671470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.671521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.671564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.727 [2024-07-24 19:19:05.671606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.671649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.671675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.671692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.673031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.727 [2024-07-24 19:19:05.673058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:02.727 [2024-07-24 19:19:05.673089] nvme_qpair.c: 
[... several hundred repeated nvme_qpair.c NOTICE pairs elided: 243:nvme_io_qpair_print_command (READ/WRITE sqid:1 nsid:1, lba 17040-18696, len:8) each followed by 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, timestamps 2024-07-24 19:19:05.673-19:19:05.688 ...]
00:22:02.732 [2024-07-24 19:19:05.688813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:02.732 [2024-07-24 19:19:05.688830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:22:02.732 Received shutdown signal, test time was about 35.082806 seconds
00:22:02.732
00:22:02.732                                                                                                  Latency(us)
00:22:02.732 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:02.732 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:02.732 Verification LBA range: start 0x0 length 0x4000
00:22:02.732 Nvme0n1                     :      35.08    7201.12      28.13       0.00       0.00   17743.31     194.18 4076242.11
00:22:02.732 ===================================================================================================================
00:22:02.732 Total                       :               7201.12      28.13       0.00       0.00   17743.31     194.18 4076242.11
00:22:02.732 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:02.992 rmmod nvme_tcp
00:22:02.992 rmmod nvme_fabrics
00:22:02.992 rmmod nvme_keyring
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2617858 ']'
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2617858
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2617858 ']'
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2617858
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2617858
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2617858'
00:22:02.992 killing process with pid 2617858
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2617858
00:22:02.992 19:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2617858
00:22:03.251 19:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:03.251 19:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:22:03.251 19:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:22:03.251 19:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:03.251 19:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:03.251 19:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:03.251 19:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:03.251 19:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:05.796
00:22:05.796 real 0m43.421s
00:22:05.796 user 2m11.367s
00:22:05.796 sys 0m11.614s
00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:05.796 ************************************
00:22:05.796 END TEST nvmf_host_multipath_status
00:22:05.796 ************************************
00:22:05.796 19:19:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:05.796 19:19:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:22:05.796 19:19:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:05.796 19:19:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:05.796 ************************************
00:22:05.796 START TEST nvmf_discovery_remove_ifc
00:22:05.796 ************************************
00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:05.796 * Looking for test storage...
00:22:05.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
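The nvmf/common.sh steps 25 through 33 above accumulate the target's command line in the NVMF_APP bash array; later in this trace (common.sh step 270) the array is prefixed with the network-namespace wrapper before launch. A condensed sketch of that pattern, values illustrative:

  NVMF_APP=(./build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id, tracepoint mask
  # after nvmf_tcp_init creates the namespace:
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  "${NVMF_APP[@]}" -m 0x2 &                     # what nvmfappstart ends up running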
00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:05.796 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.797 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.176 19:19:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.176 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:07.177 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:07.177 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:07.177 Found net devices under 0000:08:00.0: cvl_0_0 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.177 
19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:07.177 Found net devices under 0000:08:00.1: cvl_0_1 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:07.177 19:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:07.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:07.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:22:07.177 00:22:07.177 --- 10.0.0.2 ping statistics --- 00:22:07.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.177 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:22:07.177 00:22:07.177 --- 10.0.0.1 ping statistics --- 00:22:07.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.177 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2623161 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2623161 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2623161 ']' 00:22:07.177 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.178 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:07.178 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
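The nvmf_tcp_init sequence above turns the two ice ports into a point-to-point test topology: the target interface (cvl_0_0, 10.0.0.2) is moved into the private namespace cvl_0_0_ns_spdk so the test can later remove its address without touching the host network, while the initiator interface (cvl_0_1, 10.0.0.1) stays in the root namespace. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With both pings answering, nvmfappstart launches the target inside the namespace (pid 2623161 in this run) on reactor mask 0x2.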
00:22:07.178 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:07.178 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:07.178 [2024-07-24 19:19:13.183462] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:22:07.178 [2024-07-24 19:19:13.183572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.438 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.438 [2024-07-24 19:19:13.250554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.438 [2024-07-24 19:19:13.366470] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.438 [2024-07-24 19:19:13.366543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.438 [2024-07-24 19:19:13.366559] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.438 [2024-07-24 19:19:13.366573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.438 [2024-07-24 19:19:13.366585] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:07.438 [2024-07-24 19:19:13.366621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.704 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.704 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:22:07.704 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.704 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:07.704 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:07.704 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.704 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:07.704 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.704 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:07.704 [2024-07-24 19:19:13.510396] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.705 [2024-07-24 19:19:13.518595] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:07.705 null0 00:22:07.705 [2024-07-24 19:19:13.550520] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.705 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.705 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2623183 00:22:07.705 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2623183 /tmp/host.sock 00:22:07.705 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@831 -- # '[' -z 2623183 ']' 00:22:07.705 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:22:07.705 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:07.705 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:07.705 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:07.705 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:07.705 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:07.705 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:07.705 [2024-07-24 19:19:13.622177] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:22:07.705 [2024-07-24 19:19:13.622267] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2623183 ] 00:22:07.705 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.705 [2024-07-24 19:19:13.678373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.005 [2024-07-24 19:19:13.777967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.005 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:08.005 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:22:08.005 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:08.005 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:08.005 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.005 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:08.005 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.005 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:08.005 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.005 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:08.005 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.005 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:08.005 
19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.005 19:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:09.387 [2024-07-24 19:19:15.018316] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:09.387 [2024-07-24 19:19:15.018370] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:09.387 [2024-07-24 19:19:15.018397] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:09.387 [2024-07-24 19:19:15.146779] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:09.387 [2024-07-24 19:19:15.370996] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:09.387 [2024-07-24 19:19:15.371067] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:09.387 [2024-07-24 19:19:15.371120] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:09.387 [2024-07-24 19:19:15.371147] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:09.387 [2024-07-24 19:19:15.371184] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:09.387 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.387 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:09.387 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:09.387 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.387 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:09.387 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.387 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:09.387 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:09.387 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:09.387 [2024-07-24 19:19:15.376951] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1582da0 was disconnected and freed. delete nvme_qpair. 
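The host side of the test is a second SPDK app acting as the NVMe-oF initiator, driven over its own RPC socket. Reduced to the calls made above (rpc_cmd is essentially a wrapper around scripts/rpc.py):

  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1   # option as issued by the script
  rpc.py -s /tmp/host.sock framework_start_init
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
      -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
  # get_bdev_list / wait_for_bdev: poll once per second until the bdev list
  # matches the expected value (nvme0n1 at this stage)
  while [[ "$(rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != nvme0n1 ]]; do
      sleep 1
  done

Discovery found nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420, attached it as nvme0, and its namespace bdev nvme0n1 (backed by the null0 bdev created on the target earlier) appeared, so the first wait completed immediately.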
00:22:09.387 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.647 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:09.647 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:09.647 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:09.647 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:09.647 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:09.647 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.647 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:09.647 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.647 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:09.647 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:09.647 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:09.647 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.647 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:09.647 19:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:10.586 19:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:10.586 19:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.586 19:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.586 19:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:10.586 19:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:10.586 19:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:10.586 19:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:10.586 19:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.586 19:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:10.586 19:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:11.967 19:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:11.967 19:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.967 19:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:11.967 19:19:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.967 19:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:11.967 19:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:11.967 19:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:11.967 19:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.967 19:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:11.967 19:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:12.904 19:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:12.904 19:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.904 19:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:12.904 19:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.904 19:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:12.904 19:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:12.905 19:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:12.905 19:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.905 19:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:12.905 19:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:13.841 19:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:13.841 19:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.841 19:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.841 19:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:13.841 19:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:13.841 19:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:13.841 19:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:13.841 19:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.841 19:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:13.841 19:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:14.780 19:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:14.780 19:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.780 19:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:14.780 19:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.780 19:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:14.780 19:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:14.780 19:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:14.780 19:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.780 19:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:14.780 19:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:15.041 [2024-07-24 19:19:20.811838] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:15.041 [2024-07-24 19:19:20.811912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.041 [2024-07-24 19:19:20.811935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.041 [2024-07-24 19:19:20.811956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.041 [2024-07-24 19:19:20.811972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.041 [2024-07-24 19:19:20.811988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.041 [2024-07-24 19:19:20.812003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.041 [2024-07-24 19:19:20.812019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.041 [2024-07-24 19:19:20.812034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.041 [2024-07-24 19:19:20.812050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.041 [2024-07-24 19:19:20.812064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.041 [2024-07-24 19:19:20.812079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1549620 is same with the state(5) to be set 00:22:15.041 [2024-07-24 19:19:20.821863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1549620 (9): Bad file descriptor 00:22:15.041 [2024-07-24 19:19:20.831903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:15.982 19:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:15.982 19:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
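The qpair state dump and the "resetting controller" notice above are the direct fallout of steps 75 and 76 earlier in the trace, where the test deleted 10.0.0.2/24 from cvl_0_0 and downed the link inside the namespace: the host's socket reads fail with errno 110 and queued admin commands are aborted with SQ DELETION. Given the options passed to bdev_nvme_start_discovery, the expected timeline is roughly:

  # t~0s  spdk_sock_recv() fails (errno 110); the reset/reconnect cycle begins
  # t~1s  fast-io-fail-timeout-sec=1: outstanding I/O is failed back to the bdev layer
  #       reconnect-delay-sec=1: a reconnect is attempted about once per second, each refused
  # t~2s  ctrlr-loss-timeout-sec=2: the controller is dropped, nvme0n1 disappears,
  #       and the wait_for_bdev '' poll on an empty bdev list can complete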
00:22:15.982 19:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:15.982 19:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.982 19:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:15.982 19:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:15.982 19:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:15.982 [2024-07-24 19:19:21.875525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:15.982 [2024-07-24 19:19:21.875600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1549620 with addr=10.0.0.2, port=4420 00:22:15.982 [2024-07-24 19:19:21.875626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1549620 is same with the state(5) to be set 00:22:15.982 [2024-07-24 19:19:21.875678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1549620 (9): Bad file descriptor 00:22:15.982 [2024-07-24 19:19:21.876122] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.982 [2024-07-24 19:19:21.876176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:15.982 [2024-07-24 19:19:21.876193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:15.982 [2024-07-24 19:19:21.876210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:15.982 [2024-07-24 19:19:21.876241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.982 [2024-07-24 19:19:21.876257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:15.982 19:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.982 19:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:15.982 19:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:16.921 [2024-07-24 19:19:22.878757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:16.921 [2024-07-24 19:19:22.878828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:16.921 [2024-07-24 19:19:22.878846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:16.921 [2024-07-24 19:19:22.878864] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:16.921 [2024-07-24 19:19:22.878897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:16.921 [2024-07-24 19:19:22.878937] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:16.921 [2024-07-24 19:19:22.879001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:16.921 [2024-07-24 19:19:22.879024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.921 [2024-07-24 19:19:22.879046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:16.921 [2024-07-24 19:19:22.879061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.921 [2024-07-24 19:19:22.879077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:16.921 [2024-07-24 19:19:22.879092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.921 [2024-07-24 19:19:22.879108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:16.921 [2024-07-24 19:19:22.879123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.921 [2024-07-24 19:19:22.879139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:16.921 [2024-07-24 19:19:22.879153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.921 [2024-07-24 19:19:22.879169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
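At this point both the data controller (tqpair 0x1549620) and the discovery controller (tqpair 0x1548a80) are gone, so the bdev list below reads empty and wait_for_bdev '' succeeds. The test then restores the target address and lets the still-running discovery service re-attach, which is what creates nvme1 further down:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # wait_for_bdev nvme1n1: the same one-second polling loop, now expecting
  # the namespace bdev of the freshly attached controller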
00:22:16.921 [2024-07-24 19:19:22.879245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1548a80 (9): Bad file descriptor 00:22:16.921 [2024-07-24 19:19:22.880243] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:16.921 [2024-07-24 19:19:22.880267] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:16.921 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:16.921 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:16.921 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:16.921 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.921 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:16.921 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:16.921 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:16.921 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.182 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:17.182 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.182 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.182 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:17.182 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:17.182 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.182 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:17.182 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.182 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:17.182 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:17.182 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:17.182 19:19:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.182 19:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:17.182 19:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:18.120 19:19:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:18.120 19:19:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:18.120 19:19:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:18.120 19:19:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.120 19:19:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:18.120 19:19:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:18.120 19:19:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:18.120 19:19:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.120 19:19:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:18.120 19:19:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:19.057 [2024-07-24 19:19:24.894378] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:19.057 [2024-07-24 19:19:24.894416] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:19.057 [2024-07-24 19:19:24.894443] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:19.057 [2024-07-24 19:19:24.981701] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:19.057 19:19:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:19.058 19:19:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.058 19:19:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:19.058 19:19:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.058 19:19:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:19.058 19:19:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:19.058 19:19:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:19.058 19:19:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.317 19:19:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:19.317 19:19:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:19.317 [2024-07-24 19:19:25.166808] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:19.317 [2024-07-24 19:19:25.166864] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:19.317 [2024-07-24 19:19:25.166902] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:19.317 [2024-07-24 19:19:25.166933] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:19.317 [2024-07-24 19:19:25.166948] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:19.317 [2024-07-24 19:19:25.172690] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1550250 was disconnected and freed. 
delete nvme_qpair. 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2623183 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2623183 ']' 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2623183 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2623183 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2623183' 00:22:20.257 killing process with pid 2623183 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2623183 00:22:20.257 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2623183 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:20.517 rmmod nvme_tcp 00:22:20.517 rmmod nvme_fabrics 00:22:20.517 rmmod nvme_keyring 
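Teardown then runs in two halves: killprocess stops the host app and nvmfcleanup unloads the kernel initiator modules (the rmmod lines above are modprobe -v output), after which nvmftestfini kills the target and dismantles the namespace. Condensed, with the namespace removal approximated since the _remove_spdk_ns helper body is not shown in this trace:

  kill 2623183 && wait 2623183        # host-side app, reactor_0
  sync
  modprobe -v -r nvme-tcp             # pulls out nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 2623161 && wait 2623161        # target app running inside the netns, reactor_1
  ip netns delete cvl_0_0_ns_spdk     # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1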
00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2623161 ']' 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2623161 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2623161 ']' 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2623161 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2623161 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2623161' 00:22:20.517 killing process with pid 2623161 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2623161 00:22:20.517 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2623161 00:22:20.777 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:20.777 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:20.777 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:20.777 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.777 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.777 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.777 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.777 19:19:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.315 19:19:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:23.315 00:22:23.315 real 0m17.434s 00:22:23.315 user 0m25.945s 00:22:23.315 sys 0m2.634s 00:22:23.315 19:19:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:23.315 19:19:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:23.315 ************************************ 00:22:23.315 END TEST nvmf_discovery_remove_ifc 00:22:23.315 ************************************ 00:22:23.315 19:19:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:23.315 19:19:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:23.315 19:19:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:23.315 19:19:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.315 ************************************ 00:22:23.315 START TEST nvmf_identify_kernel_target 00:22:23.315 ************************************ 00:22:23.315 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:23.315 * Looking for test storage... 00:22:23.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:23.315 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.315 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:23.315 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.316 19:19:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:23.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:24.697 
19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:24.697 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:24.697 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:24.697 Found net devices under 0000:08:00.0: cvl_0_0 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:24.697 Found net devices under 0000:08:00.1: cvl_0_1 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.697 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:24.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:24.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:22:24.698 00:22:24.698 --- 10.0.0.2 ping statistics --- 00:22:24.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.698 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:24.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:22:24.698 00:22:24.698 --- 10.0.0.1 ping statistics --- 00:22:24.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.698 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:24.698 19:19:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:25.634 Waiting for block devices as requested 00:22:25.634 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:22:25.894 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:22:25.894 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:22:25.894 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:22:25.894 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:22:26.155 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:22:26.155 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:22:26.155 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:22:26.155 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:22:26.415 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:22:26.415 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:22:26.415 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:22:26.674 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:22:26.674 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:22:26.674 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:22:26.674 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:22:26.933 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
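The configure_kernel_target call that starts above (and whose xtrace continues below) builds a kernel NVMe-oF target entirely through configfs: load nvmet, create a subsystem with one namespace backed by a local block device, create a TCP port on 10.0.0.1:4420, and link the subsystem into the port. A condensed sketch of that sequence, to be run as root. Note that xtrace does not print redirection targets, so the bare "echo" commands in the log hide their destinations; the configfs attribute names below are the standard nvmet ones these writes presumably land in:

    modprobe nvmet
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1

    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    # Subsystem: identification string, open host access, and a namespace
    # backed by the GPT-free local disk found in the scan above.
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed attribute
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"

    # Port: listen on TCP 10.0.0.1:4420, IPv4.
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"

    # Exposing the subsystem on the port is what makes the kernel listen;
    # the "nvme discover ... -a 10.0.0.1 -t tcp -s 4420" in the trace below
    # then returns the two discovery-log records shown.
    ln -s "$subsys" "$port/subsystems/"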
00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:26.933 No valid GPT data, bailing 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:26.933 19:19:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:22:26.933 00:22:26.933 Discovery Log Number of Records 2, Generation counter 2 00:22:26.933 =====Discovery Log Entry 0====== 00:22:26.933 trtype: tcp 00:22:26.933 adrfam: ipv4 00:22:26.933 subtype: current discovery subsystem 00:22:26.933 treq: not specified, sq flow control disable supported 00:22:26.933 portid: 1 00:22:26.933 trsvcid: 4420 00:22:26.933 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:26.933 traddr: 10.0.0.1 00:22:26.933 eflags: none 00:22:26.933 sectype: none 00:22:26.933 =====Discovery Log Entry 1====== 00:22:26.933 trtype: tcp 00:22:26.933 adrfam: ipv4 00:22:26.933 subtype: nvme subsystem 00:22:26.933 treq: not specified, sq flow control disable supported 00:22:26.933 portid: 1 00:22:26.933 trsvcid: 4420 00:22:26.933 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:26.933 traddr: 10.0.0.1 00:22:26.933 eflags: none 00:22:26.933 sectype: none 00:22:26.933 19:19:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:26.933 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:27.194 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.194 ===================================================== 00:22:27.194 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:27.194 ===================================================== 00:22:27.194 Controller Capabilities/Features 00:22:27.194 ================================ 00:22:27.194 Vendor ID: 0000 00:22:27.194 Subsystem Vendor ID: 0000 00:22:27.194 Serial Number: 4bd84e26cdb0193e19c1 00:22:27.194 Model Number: Linux 00:22:27.194 Firmware Version: 6.7.0-68 00:22:27.194 Recommended Arb Burst: 0 00:22:27.194 IEEE OUI Identifier: 00 00 00 00:22:27.194 Multi-path I/O 00:22:27.194 May have multiple subsystem ports: No 00:22:27.194 May have multiple controllers: No 00:22:27.194 Associated with SR-IOV VF: No 00:22:27.194 Max Data Transfer Size: Unlimited 00:22:27.194 Max Number of Namespaces: 0 00:22:27.194 Max Number of I/O Queues: 1024 00:22:27.194 NVMe Specification Version (VS): 1.3 00:22:27.194 NVMe Specification Version (Identify): 1.3 00:22:27.194 Maximum Queue Entries: 1024 00:22:27.194 Contiguous Queues Required: No 00:22:27.194 Arbitration Mechanisms Supported 00:22:27.194 Weighted Round Robin: Not Supported 00:22:27.194 Vendor Specific: Not Supported 00:22:27.194 Reset Timeout: 7500 ms 00:22:27.194 Doorbell Stride: 4 bytes 00:22:27.194 NVM Subsystem Reset: Not Supported 00:22:27.194 Command Sets Supported 00:22:27.194 NVM Command Set: Supported 00:22:27.194 Boot Partition: Not Supported 00:22:27.194 Memory Page Size Minimum: 4096 bytes 00:22:27.194 Memory Page Size Maximum: 4096 bytes 00:22:27.194 Persistent Memory Region: Not Supported 00:22:27.194 Optional Asynchronous Events Supported 00:22:27.194 Namespace Attribute Notices: Not Supported 00:22:27.194 Firmware Activation Notices: Not Supported 00:22:27.194 ANA Change Notices: Not Supported 00:22:27.194 PLE Aggregate Log Change Notices: Not Supported 00:22:27.194 LBA Status Info Alert Notices: Not Supported 00:22:27.194 EGE Aggregate Log Change Notices: Not Supported 00:22:27.194 Normal NVM Subsystem Shutdown event: Not Supported 00:22:27.194 Zone Descriptor Change Notices: Not Supported 00:22:27.194 Discovery Log Change Notices: Supported 00:22:27.194 Controller Attributes 00:22:27.194 128-bit Host Identifier: Not Supported 00:22:27.194 Non-Operational Permissive Mode: Not Supported 00:22:27.194 NVM Sets: Not Supported 00:22:27.194 Read Recovery Levels: Not Supported 00:22:27.194 Endurance Groups: Not Supported 00:22:27.194 Predictable Latency Mode: Not Supported 00:22:27.194 Traffic Based Keep ALive: Not Supported 00:22:27.194 Namespace Granularity: Not Supported 00:22:27.194 SQ Associations: Not Supported 00:22:27.194 UUID List: Not Supported 00:22:27.194 Multi-Domain Subsystem: Not Supported 00:22:27.194 Fixed Capacity Management: Not Supported 00:22:27.194 Variable Capacity Management: Not Supported 00:22:27.194 Delete Endurance Group: Not Supported 00:22:27.194 Delete NVM Set: Not Supported 00:22:27.194 Extended LBA Formats Supported: Not Supported 00:22:27.194 Flexible Data Placement Supported: Not Supported 00:22:27.194 00:22:27.194 Controller Memory Buffer Support 00:22:27.194 ================================ 00:22:27.194 Supported: No 
00:22:27.194 00:22:27.194 Persistent Memory Region Support 00:22:27.194 ================================ 00:22:27.194 Supported: No 00:22:27.194 00:22:27.194 Admin Command Set Attributes 00:22:27.194 ============================ 00:22:27.194 Security Send/Receive: Not Supported 00:22:27.194 Format NVM: Not Supported 00:22:27.194 Firmware Activate/Download: Not Supported 00:22:27.194 Namespace Management: Not Supported 00:22:27.194 Device Self-Test: Not Supported 00:22:27.194 Directives: Not Supported 00:22:27.194 NVMe-MI: Not Supported 00:22:27.194 Virtualization Management: Not Supported 00:22:27.194 Doorbell Buffer Config: Not Supported 00:22:27.194 Get LBA Status Capability: Not Supported 00:22:27.194 Command & Feature Lockdown Capability: Not Supported 00:22:27.194 Abort Command Limit: 1 00:22:27.194 Async Event Request Limit: 1 00:22:27.194 Number of Firmware Slots: N/A 00:22:27.194 Firmware Slot 1 Read-Only: N/A 00:22:27.194 Firmware Activation Without Reset: N/A 00:22:27.194 Multiple Update Detection Support: N/A 00:22:27.194 Firmware Update Granularity: No Information Provided 00:22:27.194 Per-Namespace SMART Log: No 00:22:27.194 Asymmetric Namespace Access Log Page: Not Supported 00:22:27.194 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:27.194 Command Effects Log Page: Not Supported 00:22:27.194 Get Log Page Extended Data: Supported 00:22:27.194 Telemetry Log Pages: Not Supported 00:22:27.194 Persistent Event Log Pages: Not Supported 00:22:27.194 Supported Log Pages Log Page: May Support 00:22:27.194 Commands Supported & Effects Log Page: Not Supported 00:22:27.194 Feature Identifiers & Effects Log Page:May Support 00:22:27.194 NVMe-MI Commands & Effects Log Page: May Support 00:22:27.194 Data Area 4 for Telemetry Log: Not Supported 00:22:27.194 Error Log Page Entries Supported: 1 00:22:27.194 Keep Alive: Not Supported 00:22:27.194 00:22:27.194 NVM Command Set Attributes 00:22:27.194 ========================== 00:22:27.194 Submission Queue Entry Size 00:22:27.194 Max: 1 00:22:27.194 Min: 1 00:22:27.194 Completion Queue Entry Size 00:22:27.194 Max: 1 00:22:27.194 Min: 1 00:22:27.194 Number of Namespaces: 0 00:22:27.195 Compare Command: Not Supported 00:22:27.195 Write Uncorrectable Command: Not Supported 00:22:27.195 Dataset Management Command: Not Supported 00:22:27.195 Write Zeroes Command: Not Supported 00:22:27.195 Set Features Save Field: Not Supported 00:22:27.195 Reservations: Not Supported 00:22:27.195 Timestamp: Not Supported 00:22:27.195 Copy: Not Supported 00:22:27.195 Volatile Write Cache: Not Present 00:22:27.195 Atomic Write Unit (Normal): 1 00:22:27.195 Atomic Write Unit (PFail): 1 00:22:27.195 Atomic Compare & Write Unit: 1 00:22:27.195 Fused Compare & Write: Not Supported 00:22:27.195 Scatter-Gather List 00:22:27.195 SGL Command Set: Supported 00:22:27.195 SGL Keyed: Not Supported 00:22:27.195 SGL Bit Bucket Descriptor: Not Supported 00:22:27.195 SGL Metadata Pointer: Not Supported 00:22:27.195 Oversized SGL: Not Supported 00:22:27.195 SGL Metadata Address: Not Supported 00:22:27.195 SGL Offset: Supported 00:22:27.195 Transport SGL Data Block: Not Supported 00:22:27.195 Replay Protected Memory Block: Not Supported 00:22:27.195 00:22:27.195 Firmware Slot Information 00:22:27.195 ========================= 00:22:27.195 Active slot: 0 00:22:27.195 00:22:27.195 00:22:27.195 Error Log 00:22:27.195 ========= 00:22:27.195 00:22:27.195 Active Namespaces 00:22:27.195 ================= 00:22:27.195 Discovery Log Page 00:22:27.195 ================== 00:22:27.195 
Generation Counter: 2 00:22:27.195 Number of Records: 2 00:22:27.195 Record Format: 0 00:22:27.195 00:22:27.195 Discovery Log Entry 0 00:22:27.195 ---------------------- 00:22:27.195 Transport Type: 3 (TCP) 00:22:27.195 Address Family: 1 (IPv4) 00:22:27.195 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:27.195 Entry Flags: 00:22:27.195 Duplicate Returned Information: 0 00:22:27.195 Explicit Persistent Connection Support for Discovery: 0 00:22:27.195 Transport Requirements: 00:22:27.195 Secure Channel: Not Specified 00:22:27.195 Port ID: 1 (0x0001) 00:22:27.195 Controller ID: 65535 (0xffff) 00:22:27.195 Admin Max SQ Size: 32 00:22:27.195 Transport Service Identifier: 4420 00:22:27.195 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:27.195 Transport Address: 10.0.0.1 00:22:27.195 Discovery Log Entry 1 00:22:27.195 ---------------------- 00:22:27.195 Transport Type: 3 (TCP) 00:22:27.195 Address Family: 1 (IPv4) 00:22:27.195 Subsystem Type: 2 (NVM Subsystem) 00:22:27.195 Entry Flags: 00:22:27.195 Duplicate Returned Information: 0 00:22:27.195 Explicit Persistent Connection Support for Discovery: 0 00:22:27.195 Transport Requirements: 00:22:27.195 Secure Channel: Not Specified 00:22:27.195 Port ID: 1 (0x0001) 00:22:27.195 Controller ID: 65535 (0xffff) 00:22:27.195 Admin Max SQ Size: 32 00:22:27.195 Transport Service Identifier: 4420 00:22:27.195 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:27.195 Transport Address: 10.0.0.1 00:22:27.195 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:27.195 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.195 get_feature(0x01) failed 00:22:27.195 get_feature(0x02) failed 00:22:27.195 get_feature(0x04) failed 00:22:27.195 ===================================================== 00:22:27.195 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:27.195 ===================================================== 00:22:27.195 Controller Capabilities/Features 00:22:27.195 ================================ 00:22:27.195 Vendor ID: 0000 00:22:27.195 Subsystem Vendor ID: 0000 00:22:27.195 Serial Number: b6620bd03bb9e67a1700 00:22:27.195 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:27.195 Firmware Version: 6.7.0-68 00:22:27.195 Recommended Arb Burst: 6 00:22:27.195 IEEE OUI Identifier: 00 00 00 00:22:27.195 Multi-path I/O 00:22:27.195 May have multiple subsystem ports: Yes 00:22:27.195 May have multiple controllers: Yes 00:22:27.195 Associated with SR-IOV VF: No 00:22:27.195 Max Data Transfer Size: Unlimited 00:22:27.195 Max Number of Namespaces: 1024 00:22:27.195 Max Number of I/O Queues: 128 00:22:27.195 NVMe Specification Version (VS): 1.3 00:22:27.195 NVMe Specification Version (Identify): 1.3 00:22:27.195 Maximum Queue Entries: 1024 00:22:27.195 Contiguous Queues Required: No 00:22:27.195 Arbitration Mechanisms Supported 00:22:27.195 Weighted Round Robin: Not Supported 00:22:27.195 Vendor Specific: Not Supported 00:22:27.195 Reset Timeout: 7500 ms 00:22:27.195 Doorbell Stride: 4 bytes 00:22:27.195 NVM Subsystem Reset: Not Supported 00:22:27.195 Command Sets Supported 00:22:27.195 NVM Command Set: Supported 00:22:27.195 Boot Partition: Not Supported 00:22:27.195 Memory Page Size Minimum: 4096 bytes 00:22:27.195 Memory Page Size Maximum: 4096 bytes 00:22:27.195 
Persistent Memory Region: Not Supported 00:22:27.195 Optional Asynchronous Events Supported 00:22:27.195 Namespace Attribute Notices: Supported 00:22:27.195 Firmware Activation Notices: Not Supported 00:22:27.195 ANA Change Notices: Supported 00:22:27.195 PLE Aggregate Log Change Notices: Not Supported 00:22:27.195 LBA Status Info Alert Notices: Not Supported 00:22:27.195 EGE Aggregate Log Change Notices: Not Supported 00:22:27.195 Normal NVM Subsystem Shutdown event: Not Supported 00:22:27.195 Zone Descriptor Change Notices: Not Supported 00:22:27.195 Discovery Log Change Notices: Not Supported 00:22:27.195 Controller Attributes 00:22:27.195 128-bit Host Identifier: Supported 00:22:27.195 Non-Operational Permissive Mode: Not Supported 00:22:27.195 NVM Sets: Not Supported 00:22:27.195 Read Recovery Levels: Not Supported 00:22:27.195 Endurance Groups: Not Supported 00:22:27.195 Predictable Latency Mode: Not Supported 00:22:27.195 Traffic Based Keep ALive: Supported 00:22:27.195 Namespace Granularity: Not Supported 00:22:27.195 SQ Associations: Not Supported 00:22:27.195 UUID List: Not Supported 00:22:27.195 Multi-Domain Subsystem: Not Supported 00:22:27.195 Fixed Capacity Management: Not Supported 00:22:27.195 Variable Capacity Management: Not Supported 00:22:27.195 Delete Endurance Group: Not Supported 00:22:27.195 Delete NVM Set: Not Supported 00:22:27.195 Extended LBA Formats Supported: Not Supported 00:22:27.195 Flexible Data Placement Supported: Not Supported 00:22:27.195 00:22:27.195 Controller Memory Buffer Support 00:22:27.195 ================================ 00:22:27.195 Supported: No 00:22:27.195 00:22:27.195 Persistent Memory Region Support 00:22:27.195 ================================ 00:22:27.195 Supported: No 00:22:27.195 00:22:27.195 Admin Command Set Attributes 00:22:27.195 ============================ 00:22:27.195 Security Send/Receive: Not Supported 00:22:27.195 Format NVM: Not Supported 00:22:27.195 Firmware Activate/Download: Not Supported 00:22:27.195 Namespace Management: Not Supported 00:22:27.195 Device Self-Test: Not Supported 00:22:27.195 Directives: Not Supported 00:22:27.195 NVMe-MI: Not Supported 00:22:27.195 Virtualization Management: Not Supported 00:22:27.195 Doorbell Buffer Config: Not Supported 00:22:27.195 Get LBA Status Capability: Not Supported 00:22:27.195 Command & Feature Lockdown Capability: Not Supported 00:22:27.195 Abort Command Limit: 4 00:22:27.195 Async Event Request Limit: 4 00:22:27.195 Number of Firmware Slots: N/A 00:22:27.195 Firmware Slot 1 Read-Only: N/A 00:22:27.195 Firmware Activation Without Reset: N/A 00:22:27.195 Multiple Update Detection Support: N/A 00:22:27.195 Firmware Update Granularity: No Information Provided 00:22:27.195 Per-Namespace SMART Log: Yes 00:22:27.195 Asymmetric Namespace Access Log Page: Supported 00:22:27.195 ANA Transition Time : 10 sec 00:22:27.195 00:22:27.195 Asymmetric Namespace Access Capabilities 00:22:27.195 ANA Optimized State : Supported 00:22:27.195 ANA Non-Optimized State : Supported 00:22:27.195 ANA Inaccessible State : Supported 00:22:27.195 ANA Persistent Loss State : Supported 00:22:27.195 ANA Change State : Supported 00:22:27.195 ANAGRPID is not changed : No 00:22:27.195 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:27.195 00:22:27.196 ANA Group Identifier Maximum : 128 00:22:27.196 Number of ANA Group Identifiers : 128 00:22:27.196 Max Number of Allowed Namespaces : 1024 00:22:27.196 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:27.196 Command Effects Log Page: Supported 
00:22:27.196 Get Log Page Extended Data: Supported 00:22:27.196 Telemetry Log Pages: Not Supported 00:22:27.196 Persistent Event Log Pages: Not Supported 00:22:27.196 Supported Log Pages Log Page: May Support 00:22:27.196 Commands Supported & Effects Log Page: Not Supported 00:22:27.196 Feature Identifiers & Effects Log Page:May Support 00:22:27.196 NVMe-MI Commands & Effects Log Page: May Support 00:22:27.196 Data Area 4 for Telemetry Log: Not Supported 00:22:27.196 Error Log Page Entries Supported: 128 00:22:27.196 Keep Alive: Supported 00:22:27.196 Keep Alive Granularity: 1000 ms 00:22:27.196 00:22:27.196 NVM Command Set Attributes 00:22:27.196 ========================== 00:22:27.196 Submission Queue Entry Size 00:22:27.196 Max: 64 00:22:27.196 Min: 64 00:22:27.196 Completion Queue Entry Size 00:22:27.196 Max: 16 00:22:27.196 Min: 16 00:22:27.196 Number of Namespaces: 1024 00:22:27.196 Compare Command: Not Supported 00:22:27.196 Write Uncorrectable Command: Not Supported 00:22:27.196 Dataset Management Command: Supported 00:22:27.196 Write Zeroes Command: Supported 00:22:27.196 Set Features Save Field: Not Supported 00:22:27.196 Reservations: Not Supported 00:22:27.196 Timestamp: Not Supported 00:22:27.196 Copy: Not Supported 00:22:27.196 Volatile Write Cache: Present 00:22:27.196 Atomic Write Unit (Normal): 1 00:22:27.196 Atomic Write Unit (PFail): 1 00:22:27.196 Atomic Compare & Write Unit: 1 00:22:27.196 Fused Compare & Write: Not Supported 00:22:27.196 Scatter-Gather List 00:22:27.196 SGL Command Set: Supported 00:22:27.196 SGL Keyed: Not Supported 00:22:27.196 SGL Bit Bucket Descriptor: Not Supported 00:22:27.196 SGL Metadata Pointer: Not Supported 00:22:27.196 Oversized SGL: Not Supported 00:22:27.196 SGL Metadata Address: Not Supported 00:22:27.196 SGL Offset: Supported 00:22:27.196 Transport SGL Data Block: Not Supported 00:22:27.196 Replay Protected Memory Block: Not Supported 00:22:27.196 00:22:27.196 Firmware Slot Information 00:22:27.196 ========================= 00:22:27.196 Active slot: 0 00:22:27.196 00:22:27.196 Asymmetric Namespace Access 00:22:27.196 =========================== 00:22:27.196 Change Count : 0 00:22:27.196 Number of ANA Group Descriptors : 1 00:22:27.196 ANA Group Descriptor : 0 00:22:27.196 ANA Group ID : 1 00:22:27.196 Number of NSID Values : 1 00:22:27.196 Change Count : 0 00:22:27.196 ANA State : 1 00:22:27.196 Namespace Identifier : 1 00:22:27.196 00:22:27.196 Commands Supported and Effects 00:22:27.196 ============================== 00:22:27.196 Admin Commands 00:22:27.196 -------------- 00:22:27.196 Get Log Page (02h): Supported 00:22:27.196 Identify (06h): Supported 00:22:27.196 Abort (08h): Supported 00:22:27.196 Set Features (09h): Supported 00:22:27.196 Get Features (0Ah): Supported 00:22:27.196 Asynchronous Event Request (0Ch): Supported 00:22:27.196 Keep Alive (18h): Supported 00:22:27.196 I/O Commands 00:22:27.196 ------------ 00:22:27.196 Flush (00h): Supported 00:22:27.196 Write (01h): Supported LBA-Change 00:22:27.196 Read (02h): Supported 00:22:27.196 Write Zeroes (08h): Supported LBA-Change 00:22:27.196 Dataset Management (09h): Supported 00:22:27.196 00:22:27.196 Error Log 00:22:27.196 ========= 00:22:27.196 Entry: 0 00:22:27.196 Error Count: 0x3 00:22:27.196 Submission Queue Id: 0x0 00:22:27.196 Command Id: 0x5 00:22:27.196 Phase Bit: 0 00:22:27.196 Status Code: 0x2 00:22:27.196 Status Code Type: 0x0 00:22:27.196 Do Not Retry: 1 00:22:27.196 Error Location: 0x28 00:22:27.196 LBA: 0x0 00:22:27.196 Namespace: 0x0 00:22:27.196 Vendor Log 
Page: 0x0 00:22:27.196 ----------- 00:22:27.196 Entry: 1 00:22:27.196 Error Count: 0x2 00:22:27.196 Submission Queue Id: 0x0 00:22:27.196 Command Id: 0x5 00:22:27.196 Phase Bit: 0 00:22:27.196 Status Code: 0x2 00:22:27.196 Status Code Type: 0x0 00:22:27.196 Do Not Retry: 1 00:22:27.196 Error Location: 0x28 00:22:27.196 LBA: 0x0 00:22:27.196 Namespace: 0x0 00:22:27.196 Vendor Log Page: 0x0 00:22:27.196 ----------- 00:22:27.196 Entry: 2 00:22:27.196 Error Count: 0x1 00:22:27.196 Submission Queue Id: 0x0 00:22:27.196 Command Id: 0x4 00:22:27.196 Phase Bit: 0 00:22:27.196 Status Code: 0x2 00:22:27.196 Status Code Type: 0x0 00:22:27.196 Do Not Retry: 1 00:22:27.196 Error Location: 0x28 00:22:27.196 LBA: 0x0 00:22:27.196 Namespace: 0x0 00:22:27.196 Vendor Log Page: 0x0 00:22:27.196 00:22:27.196 Number of Queues 00:22:27.196 ================ 00:22:27.196 Number of I/O Submission Queues: 128 00:22:27.196 Number of I/O Completion Queues: 128 00:22:27.196 00:22:27.196 ZNS Specific Controller Data 00:22:27.196 ============================ 00:22:27.196 Zone Append Size Limit: 0 00:22:27.196 00:22:27.196 00:22:27.196 Active Namespaces 00:22:27.196 ================= 00:22:27.196 get_feature(0x05) failed 00:22:27.196 Namespace ID:1 00:22:27.196 Command Set Identifier: NVM (00h) 00:22:27.196 Deallocate: Supported 00:22:27.196 Deallocated/Unwritten Error: Not Supported 00:22:27.196 Deallocated Read Value: Unknown 00:22:27.196 Deallocate in Write Zeroes: Not Supported 00:22:27.196 Deallocated Guard Field: 0xFFFF 00:22:27.196 Flush: Supported 00:22:27.196 Reservation: Not Supported 00:22:27.196 Namespace Sharing Capabilities: Multiple Controllers 00:22:27.196 Size (in LBAs): 1953525168 (931GiB) 00:22:27.196 Capacity (in LBAs): 1953525168 (931GiB) 00:22:27.196 Utilization (in LBAs): 1953525168 (931GiB) 00:22:27.196 UUID: f9aa5808-f75d-45bb-9fc2-ccd9e09797e2 00:22:27.196 Thin Provisioning: Not Supported 00:22:27.196 Per-NS Atomic Units: Yes 00:22:27.196 Atomic Boundary Size (Normal): 0 00:22:27.196 Atomic Boundary Size (PFail): 0 00:22:27.196 Atomic Boundary Offset: 0 00:22:27.196 NGUID/EUI64 Never Reused: No 00:22:27.196 ANA group ID: 1 00:22:27.196 Namespace Write Protected: No 00:22:27.196 Number of LBA Formats: 1 00:22:27.196 Current LBA Format: LBA Format #00 00:22:27.196 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:27.196 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:27.196 rmmod nvme_tcp 00:22:27.196 rmmod nvme_fabrics 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:27.196 19:19:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.196 19:19:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.748 19:19:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:29.748 19:19:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:29.748 19:19:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:29.748 19:19:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:29.748 19:19:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:29.748 19:19:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:29.748 19:19:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:29.748 19:19:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:29.748 19:19:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:29.748 19:19:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:29.748 19:19:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:30.324 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:22:30.587 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:22:30.587 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:22:30.587 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:22:30.587 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:22:30.587 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:22:30.587 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:22:30.587 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:22:30.587 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:22:30.587 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:22:30.587 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:22:30.587 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:22:30.587 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:22:30.587 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:22:30.587 0000:80:04.1 (8086 3c21): ioatdma -> 
vfio-pci 00:22:30.587 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:22:31.527 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:22:31.527 00:22:31.527 real 0m8.654s 00:22:31.527 user 0m1.751s 00:22:31.527 sys 0m3.033s 00:22:31.527 19:19:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:31.527 19:19:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.527 ************************************ 00:22:31.527 END TEST nvmf_identify_kernel_target 00:22:31.527 ************************************ 00:22:31.527 19:19:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:31.527 19:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:31.527 19:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:31.527 19:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.527 ************************************ 00:22:31.527 START TEST nvmf_auth_host 00:22:31.527 ************************************ 00:22:31.527 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:31.786 * Looking for test storage... 00:22:31.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
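The --hostnqn/--hostid pair threaded through the rest of this test (nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc) comes from the `nvme gen-hostnqn` call in nvmf/common.sh above; nvme-cli typically derives that UUID from the machine's DMI product UUID, falling back to a random one. A minimal sketch of the derivation (the parameter expansion is illustrative, not the script's exact code):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # bare UUID, reused as --hostid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")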
00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:31.786 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:31.787 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:33.694 19:19:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:33.694 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
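gather_supported_nvmf_pci_devs classifies candidate NICs purely by PCI vendor:device ID; 0x8086:0x159b (the two "Found 0000:08:00.x" lines around this point) is an Intel E810 port driven by ice, so both ports land in the e810 array and become pci_devs. A sketch of the equivalent sysfs lookup, assuming the standard vendor/device attributes (the harness consults a pre-built pci_bus_cache instead of scanning inline like this):

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"   # Intel E810, ice driver
        fi
    done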
00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:33.694 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:33.694 Found net devices under 0000:08:00.0: cvl_0_0 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:33.694 Found net devices under 0000:08:00.1: cvl_0_1 00:22:33.694 19:19:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:33.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:22:33.694 00:22:33.694 --- 10.0.0.2 ping statistics --- 00:22:33.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.694 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:22:33.694 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
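The pings to 10.0.0.2 and 10.0.0.1 here verify the link nvmf_tcp_init just assembled: the target-side E810 port cvl_0_0 is moved into namespace cvl_0_0_ns_spdk as 10.0.0.2/24 while the initiator keeps cvl_0_1 in the root namespace as 10.0.0.1/24 (the two ports are evidently looped to each other, NET_TYPE=phy), and TCP port 4420 (NVMe/TCP) is opened on the initiator side. Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT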
00:22:33.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:22:33.695 00:22:33.695 --- 10.0.0.1 ping statistics --- 00:22:33.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.695 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2628697 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2628697 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2628697 ']' 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
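nvmfappstart next launches the SPDK target inside that namespace (NVMF_APP is prefixed with the netns exec wrapper at common.sh@270) and waitforlisten blocks until the RPC socket answers; -L nvme_auth enables the auth debug log captured by the trap set just above. A minimal sketch of the same start-and-wait pattern, assuming the default /var/tmp/spdk.sock RPC endpoint and relative paths (the retry loop is illustrative; the harness's waitforlisten does more checks):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1   # wait for the target to create and serve the RPC socket
    done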
00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0bbd322ef1fc7bf2aacc2902d9622bae 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Dud 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0bbd322ef1fc7bf2aacc2902d9622bae 0 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0bbd322ef1fc7bf2aacc2902d9622bae 0 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0bbd322ef1fc7bf2aacc2902d9622bae 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:33.695 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Dud 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Dud 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Dud 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:33.955 19:19:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=396dee6a937a0cf8ceb79fc6a729ea413f324003c26e2ba001bfeaf0207fdeb7 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.wxm 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 396dee6a937a0cf8ceb79fc6a729ea413f324003c26e2ba001bfeaf0207fdeb7 3 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 396dee6a937a0cf8ceb79fc6a729ea413f324003c26e2ba001bfeaf0207fdeb7 3 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=396dee6a937a0cf8ceb79fc6a729ea413f324003c26e2ba001bfeaf0207fdeb7 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.wxm 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.wxm 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.wxm 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=56f0a126800803f6f7a9e987db09f0cf97142fc888e184e6 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.HyB 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 56f0a126800803f6f7a9e987db09f0cf97142fc888e184e6 0 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 56f0a126800803f6f7a9e987db09f0cf97142fc888e184e6 0 
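Each gen_dhchap_key call here draws an ASCII-hex secret from /dev/urandom via xxd and hands it to format_dhchap_key, whose inline python frames it as DHHC-1:<hash-id>:<base64>: — hash id 00 for a null digest, 01/02/03 for sha256/384/512 (the 00 and 02 cases are visible in the finished keys later in this log). A sketch of that framing, assuming the NVMe DH-HMAC-CHAP secret representation in which a little-endian CRC-32 of the secret is appended before base64 encoding (key value copied from the trace above; the one-liner is illustrative, not the script's exact code):

    key=56f0a126800803f6f7a9e987db09f0cf97142fc888e184e6   # the hex string itself is the secret
    digest=0                                               # null digest -> "00" in the output
    python3 -c 'import sys,base64,struct,zlib; s=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(s+struct.pack("<I", zlib.crc32(s))).decode()))' "$key" "$digest"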
00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=56f0a126800803f6f7a9e987db09f0cf97142fc888e184e6 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.HyB 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.HyB 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.HyB 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0ecaa930431a57f5f61008f76b96e643929c79607f3b428d 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.sVD 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0ecaa930431a57f5f61008f76b96e643929c79607f3b428d 2 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0ecaa930431a57f5f61008f76b96e643929c79607f3b428d 2 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0ecaa930431a57f5f61008f76b96e643929c79607f3b428d 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.sVD 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.sVD 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.sVD 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:33.955 19:19:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5651b42556f8d2a97bbd5712531cde64 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Fea 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5651b42556f8d2a97bbd5712531cde64 1 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5651b42556f8d2a97bbd5712531cde64 1 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5651b42556f8d2a97bbd5712531cde64 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Fea 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Fea 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Fea 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:33.955 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=93598a84176ed7f5fdd1f0af11a496a6 00:22:33.956 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:33.956 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.216 00:22:33.956 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 93598a84176ed7f5fdd1f0af11a496a6 1 00:22:33.956 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 93598a84176ed7f5fdd1f0af11a496a6 1 00:22:33.956 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:33.956 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:33.956 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=93598a84176ed7f5fdd1f0af11a496a6 00:22:33.956 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:33.956 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:34.216 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.216 00:22:34.216 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.216 00:22:34.216 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.216 00:22:34.216 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:34.216 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:34.216 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:34.216 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:34.216 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:34.216 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:34.216 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:34.216 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ecb3989f6f05c0a1ce3cf5517c6e4278400a920fd92e70ee 00:22:34.216 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.C3l 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ecb3989f6f05c0a1ce3cf5517c6e4278400a920fd92e70ee 2 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ecb3989f6f05c0a1ce3cf5517c6e4278400a920fd92e70ee 2 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ecb3989f6f05c0a1ce3cf5517c6e4278400a920fd92e70ee 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.C3l 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.C3l 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.C3l 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:34.217 19:19:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d763eee8d2ca2004f664dcbda36fa86d 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4DR 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d763eee8d2ca2004f664dcbda36fa86d 0 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d763eee8d2ca2004f664dcbda36fa86d 0 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d763eee8d2ca2004f664dcbda36fa86d 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4DR 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4DR 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.4DR 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0105f9f2eb441088ad1ee1f3c4330ef82d44f9d91b58b4f50f7b6e8d6992f42b 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2zz 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0105f9f2eb441088ad1ee1f3c4330ef82d44f9d91b58b4f50f7b6e8d6992f42b 3 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0105f9f2eb441088ad1ee1f3c4330ef82d44f9d91b58b4f50f7b6e8d6992f42b 3 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0105f9f2eb441088ad1ee1f3c4330ef82d44f9d91b58b4f50f7b6e8d6992f42b 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2zz 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2zz 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.2zz 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2628697 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2628697 ']' 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.217 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.477 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:34.477 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:34.477 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:34.477 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Dud 00:22:34.477 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.477 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.477 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.477 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.wxm ]] 00:22:34.477 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wxm 00:22:34.477 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.477 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.HyB 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.sVD ]] 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.sVD 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Fea 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.216 ]] 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.216 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.C3l 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.4DR ]] 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.4DR 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.2zz 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:34.737 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:34.738 19:19:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:34.738 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:35.674 Waiting for block devices as requested 00:22:35.674 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:22:35.674 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:22:35.674 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:22:35.933 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:22:35.933 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:22:35.933 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:22:35.933 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:22:36.193 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:22:36.193 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:22:36.193 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:22:36.452 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:22:36.452 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:22:36.452 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:22:36.452 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:22:36.710 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:22:36.710 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:22:36.710 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:22:36.968 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:36.968 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:36.968 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:36.968 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:36.968 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:36.968 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:36.968 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:36.968 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:36.968 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:37.227 No valid GPT data, bailing 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:37.227 19:19:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:22:37.227 00:22:37.227 Discovery Log Number of Records 2, Generation counter 2 00:22:37.227 =====Discovery Log Entry 0====== 00:22:37.227 trtype: tcp 00:22:37.227 adrfam: ipv4 00:22:37.227 subtype: current discovery subsystem 00:22:37.227 treq: not specified, sq flow control disable supported 00:22:37.227 portid: 1 00:22:37.227 trsvcid: 4420 00:22:37.227 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:37.227 traddr: 10.0.0.1 00:22:37.227 eflags: none 00:22:37.227 sectype: none 00:22:37.227 =====Discovery Log Entry 1====== 00:22:37.227 trtype: tcp 00:22:37.227 adrfam: ipv4 00:22:37.227 subtype: nvme subsystem 00:22:37.227 treq: not specified, sq flow control disable supported 00:22:37.227 portid: 1 00:22:37.227 trsvcid: 4420 00:22:37.227 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:37.227 traddr: 10.0.0.1 00:22:37.227 eflags: none 00:22:37.227 sectype: none 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
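The host/auth.sh@36-@38 lines above switch the subsystem from allow-any-host to an explicit allow-list, and the nvmet_auth_set_key call that begins here (and continues below) stores the DH-HMAC-CHAP parameters for that host. A sketch of the keyid=1 case, using the dhchap_* host attributes from the kernel nvmet auth support; as before, the redirect targets are inferred, not visible in the trace:

    # Register host0 and give it the keyid=1 secrets, as
    # nvmet_auth_set_key sha256 ffdhe2048 1 does in this trace.
    nvmet=/sys/kernel/config/nvmet
    host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    mkdir "$host"
    echo 0 > "$subsys/attr_allow_any_host"    # only allow-listed hosts from now on
    ln -s "$host" "$subsys/allowed_hosts/"

    echo 'hmac(sha256)' > "$host/dhchap_hash"     # digest under test
    echo ffdhe2048      > "$host/dhchap_dhgroup"  # DH group under test
    # Host key; setting dhchap_ctrl_key too makes the handshake bidirectional.
    echo 'DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==:' > "$host/dhchap_key"
    echo 'DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==:' > "$host/dhchap_ctrl_key"
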
-- host/auth.sh@49 -- # echo ffdhe2048 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.227 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.487 nvme0n1 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:37.487 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
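On the initiator side, connect_authenticate reduces to two JSON-RPC calls: restrict the digests and DH groups SPDK may negotiate, then attach with a key pair. rpc_cmd in the trace is autotest's thin wrapper around scripts/rpc.py, so the first pass above (host/auth.sh@88-@93) done by hand would look like the following; key1/ckey1 name DH-HMAC-CHAP keys the script registered earlier, outside this excerpt:

    # Offer every digest and DH group at once, authenticating with
    # key1/ckey1 (bidirectional), as in the trace above.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
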
00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.488 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.747 nvme0n1 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.747 19:19:43 
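Each iteration ends the same way, as in the host/auth.sh@64/@65 lines just above: list the controllers, assert the new one is nvme0, and detach so the next digest/DH-group/key combination starts from a clean state. In plain commands:

    # Verify the authenticated attach, then tear it down.
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                                  # non-zero exit fails the test
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
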
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.747 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.748 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:37.748 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.748 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:37.748 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:37.748 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:37.748 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.748 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.748 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.006 nvme0n1 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.006 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.265 nvme0n1 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:38.265 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.266 nvme0n1 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.266 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.525 nvme0n1 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.525 19:19:44 
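The auth.sh@100-@103 markers repeating through this stretch reveal the overall sweep: three digests, five ffdhe groups, and five key indices, each getting its own target reconfiguration and attach/detach cycle (75 combinations in total). A reconstruction of that shape, with the array contents read off the trace:

    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    # keys[0..4] / ckeys[0..4] hold the DHHC-1 secrets set up earlier in auth.sh.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side (configfs)
                connect_authenticate "$digest" "$dhgroup" "$keyid" # initiator side (RPC)
            done
        done
    done
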
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.525 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.825 nvme0n1 00:22:38.825 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.826 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.826 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.826 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.826 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.826 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:39.138 
19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.138 19:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.138 nvme0n1 00:22:39.138 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.138 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.138 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.139 19:19:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.139 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.423 nvme0n1 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.423 19:19:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:39.423 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.424 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.698 nvme0n1 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:39.698 19:19:45 
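The get_main_ns_ip fragment that repeats before every attach is a small indirection trick: map the transport to the name of the variable holding the right address, then dereference that name. A reconstruction consistent with the expanded checks in the trace (the real helper lives at nvmf/common.sh@741-@755, so details may differ):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # Both tests expand as the paired [[ -z ... ]] checks on @747 above.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # ${!ip}: indirect expansion, 10.0.0.1 in this run
        echo "${!ip}"
    }
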
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.698 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.958 nvme0n1 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.958 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.527 nvme0n1 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:40.527 19:19:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.527 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.528 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.788 nvme0n1 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
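
Each iteration above follows the same host-side pattern: pin the initiator to a single digest/dhgroup pair with bdev_nvme_set_options, attach with the keyring names for the current keyid, confirm the controller came up, and detach before the next combination. Below is a condensed sketch of one such round, assuming scripts/rpc.py from an SPDK checkout (rpc_cmd in the trace is, in effect, the suite's wrapper around it) and that key2/ckey2 were registered with the target application earlier in the test, which is not shown in this excerpt:

#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate round as replayed in the trace.
# rpc.py path, NQNs, address/port and the key names are taken from the log;
# how the keys were registered is outside this excerpt.
set -e
rpc=scripts/rpc.py
digest=sha256 dhgroup=ffdhe4096 keyid=2
# ckeys[4] is intentionally left unset, mirroring the test (keyid 4 has no
# controller key, so that attach runs without bidirectional auth).
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]=ckey3)

# Pin the host to exactly one digest/dhgroup so the handshake must use them.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# The ":+" expansion emits --dhchap-ctrlr-key only when a controller key
# exists for this keyid; this is the same idiom as host/auth.sh@58 above.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"

# A successful handshake leaves a controller named nvme0 behind; verify it,
# then detach before the next digest/dhgroup/keyid combination.
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
"$rpc" bdev_nvme_detach_controller nvme0

The 10.0.0.1 address is resolved by get_main_ns_ip, which maps the transport to an environment variable through the ip_candidates associative array (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and echoes its value. The surrounding loops at host/auth.sh@101-102 replay this round for every dhgroup (ffdhe3072 through ffdhe8192 in this stretch) and every keyid 0-4, all under hmac(sha256) here.
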
00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:40.788 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:40.789 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.789 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.789 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:40.789 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.789 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:40.789 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:40.789 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:40.789 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.789 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.789 19:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.048 nvme0n1 00:22:41.048 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.048 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.048 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.048 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.048 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.048 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.048 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.048 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.048 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.048 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.308 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.308 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.308 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:22:41.308 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.308 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:41.308 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:22:41.308 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.309 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.568 nvme0n1 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:41.568 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.569 19:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.569 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.828 nvme0n1 00:22:41.828 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.828 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.828 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.828 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.828 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.828 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.088 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.089 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.658 nvme0n1 00:22:42.658 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.658 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.658 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.658 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.658 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.658 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 
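
The echo lines at host/auth.sh@48-51 are the target half of each round: nvmet_auth_set_key pushes the same digest, dhgroup, and DHHC-1 secrets into the kernel nvmet entry for the host NQN, so both sides agree on the parameters before the attach is attempted. The xtrace only shows the data being echoed, not the destinations; the following is a sketch of the plausible writes, where the configfs paths are an assumption (the standard Linux kernel nvmet layout) rather than something visible in this trace:

#!/usr/bin/env bash
# Target-side counterpart of the echoes at host/auth.sh@48-51 above, for the
# ffdhe6144/keyid=1 round. Secrets are copied from the log; the configfs
# location is assumed, not shown in the trace.
set -e
nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
digest=sha256 dhgroup=ffdhe6144
key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==:
ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==:

echo "hmac($digest)" > "$nvmet_host/dhchap_hash"     # host/auth.sh@48
echo "$dhgroup"      > "$nvmet_host/dhchap_dhgroup"  # host/auth.sh@49
echo "$key"          > "$nvmet_host/dhchap_key"      # host/auth.sh@50
# The controller key is optional and only written when bidirectional
# authentication is being exercised for this keyid (host/auth.sh@51).
[[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"

In the DHHC-1:NN:<base64>: notation used throughout, the NN field records how the secret was transformed per the NVMe DH-HMAC-CHAP secret format (00 for an untransformed secret, 01/02/03 for SHA-256/384/512-derived ones), which is why the trace mixes 00, 01, 02, and 03 secrets across the five keyids.
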
00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.659 19:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.229 nvme0n1 00:22:43.229 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.229 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.229 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.229 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.229 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.229 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.229 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.229 19:19:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.229 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.229 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.488 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.058 nvme0n1 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.058 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.059 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.059 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.059 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.059 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.059 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.059 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.059 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.059 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.059 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.059 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.059 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:44.059 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.059 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.629 nvme0n1 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.629 19:19:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:44.629 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.630 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:44.630 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.630 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.891 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.891 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.891 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.891 19:19:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.891 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.891 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.891 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.891 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.891 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.891 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.891 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.891 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.891 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:44.891 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.891 19:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.463 nvme0n1 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.463 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:46.846 nvme0n1 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:46.846 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:46.847 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:46.847 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.847 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.847 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:46.847 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.847 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:46.847 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:46.847 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:46.847 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.847 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.847 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.788 nvme0n1 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:47.788 
19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.788 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.166 nvme0n1 00:22:49.166 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:49.167 
19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.167 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.102 nvme0n1 00:22:50.102 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.102 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.102 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.102 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:50.102 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.102 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.103 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.362 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.303 nvme0n1 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.303 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.564 nvme0n1 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.564 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.565 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.825 nvme0n1 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:51.825 19:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.825 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.085 nvme0n1 00:22:52.085 19:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.085 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.085 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.085 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.085 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.085 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.085 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.085 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.085 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.085 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.085 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.085 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:52.085 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:52.085 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.085 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:52.085 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:52.085 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.086 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.346 nvme0n1 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:52.346 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.347 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.347 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:52.347 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.347 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:52.347 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:52.347 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:52.347 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:52.347 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.347 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.606 nvme0n1 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.606 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.865 nvme0n1 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.865 
19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:52.865 19:19:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.865 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.125 nvme0n1 00:22:53.125 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.125 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.125 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.125 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.125 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.125 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:53.125 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.126 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.386 nvme0n1 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.386 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.645 nvme0n1 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:53.645 
19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.645 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.905 nvme0n1 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.905 
19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.905 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.474 nvme0n1 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:54.474 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.475 19:20:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.475 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.734 nvme0n1 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.734 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.300 nvme0n1 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.300 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.560 nvme0n1 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:55.560 19:20:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.560 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.820 nvme0n1 00:22:55.820 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.820 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.820 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.820 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.820 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.820 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.820 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.820 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.820 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.820 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.081 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.648 nvme0n1 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.648 19:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.219 nvme0n1 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.219 19:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.219 19:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.219 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.220 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.220 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.220 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.220 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.220 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.220 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.158 nvme0n1 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:58.158 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:58.159 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.159 
19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.726 nvme0n1 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.726 19:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.292 nvme0n1 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.293 19:20:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.293 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.669 nvme0n1 00:23:00.669 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.669 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.669 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.669 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.669 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.670 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.604 nvme0n1 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:23:01.604 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.863 
19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.863 19:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.268 nvme0n1 00:23:03.268 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.268 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.268 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.268 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.268 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.268 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.268 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.268 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.268 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.268 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.268 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.268 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.269 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.228 nvme0n1 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.228 19:20:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.228 19:20:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.228 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.602 nvme0n1 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:23:05.602 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:05.603 nvme0n1 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.603 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.862 nvme0n1 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:05.862 
19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.862 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.121 nvme0n1 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.121 
19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.121 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.121 nvme0n1 00:23:06.121 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.121 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.121 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.121 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.121 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.121 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.380 nvme0n1 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.380 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.639 nvme0n1 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.639 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.898 
19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.898 19:20:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.898 nvme0n1 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.898 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.156 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:23:07.157 19:20:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.157 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.157 nvme0n1 00:23:07.157 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.415 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.416 19:20:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.416 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.673 nvme0n1 00:23:07.673 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.673 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.673 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:07.674 
19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.674 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
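The passes above all follow one pattern per (digest, dhgroup, keyid) combination: auth.sh first installs the DHHC-1 secret on the kernel nvmet target (the `echo 'hmac(sha512)'` / `echo ffdhe2048` / `echo DHHC-1:...` lines from nvmet_auth_set_key), then restricts the host to the same digest and DH group, attaches with the per-keyid secrets, and verifies a controller came up before detaching. A minimal sketch of that host-side loop — assuming SPDK's scripts/rpc.py is on PATH (the trace goes through the suite's rpc_cmd wrapper instead), the target subsystem nqn.2024-02.io.spdk:cnode0 listens on 10.0.0.1:4420, and the secrets and the ckeys array were set up earlier in auth.sh, outside this excerpt:

    # Walk every key id for one DH group; mirrors the rpc_cmd calls in the trace.
    for keyid in 0 1 2 3 4; do
        # Allow only this digest/DH-group combination on the host side.
        rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
        # keyid 4 has no controller key in this run, so pass --dhchap-ctrlr-key
        # only when one exists (the trace's ${ckeys[keyid]:+...} expansion).
        ckey_arg=()
        [[ -n "${ckeys[keyid]:-}" ]] && ckey_arg=(--dhchap-ctrlr-key "ckey${keyid}")
        rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey_arg[@]}"
        # A successful DH-HMAC-CHAP exchange leaves a controller named nvme0 behind.
        rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
        rpc.py bdev_nvme_detach_controller nvme0
    done

Every flag in the sketch appears verbatim in the trace; only the loop framing and the rpc.py invocation are reconstructed.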
00:23:07.932 nvme0n1 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:07.932 19:20:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.932 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.933 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.933 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.933 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.933 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.933 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.933 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.933 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.933 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.933 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.191 nvme0n1 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.191 19:20:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.191 19:20:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.191 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.757 nvme0n1 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:08.757 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.758 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.016 nvme0n1 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.016 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.582 nvme0n1 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.582 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.583 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.583 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.583 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.583 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.583 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.583 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.583 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:09.583 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.583 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.841 nvme0n1 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.841 19:20:15 
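
#
# The nvmet_auth_set_key trace above provisions the target side for key id 0:
# it echoes the digest ('hmac(sha512)'), the DH group (ffdhe6144), and the two
# DHHC-1 secrets before the host reconnects. A minimal sketch of that step
# against a Linux kernel nvmet target, reusing the exact secrets from the
# trace; the configfs paths and attribute names are assumptions, not shown in
# this excerpt:
#
hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
echo 'hmac(sha512)' > "$host_dir/dhchap_hash"       # digest echoed in the trace
echo ffdhe6144 > "$host_dir/dhchap_dhgroup"         # dhgroup echoed in the trace
echo 'DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os:' > "$host_dir/dhchap_key"
echo 'DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=:' > "$host_dir/dhchap_ctrl_key"
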
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.841 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.407 nvme0n1 00:23:10.407 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.407 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.407 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.407 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.407 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.407 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.407 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.407 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.407 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.407 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.666 19:20:16 
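
#
# The connect_authenticate trace above is the host half of the handshake:
# restrict the bdev_nvme layer to a single digest/DH-group pair, then attach
# with the matching key id. A sketch using SPDK's scripts/rpc.py with the same
# arguments as the trace (rpc_cmd in the log is presumably a thin wrapper
# around it); it assumes key1/ckey1 were registered in SPDK's keyring earlier
# in the run, which is not part of this excerpt:
#
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1
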
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.666 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.232 nvme0n1 00:23:11.232 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.232 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.232 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.232 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.232 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.232 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.232 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.232 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.232 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.232 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.232 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.232 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.232 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.233 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.799 nvme0n1 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.799 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.800 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.058 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.624 nvme0n1 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:12.624 19:20:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.624 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.190 nvme0n1 00:23:13.190 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.190 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.190 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.190 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.190 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.190 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.190 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.190 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.190 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.190 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
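
#
# get_main_ns_ip, traced repeatedly above, picks the address the host should
# dial for the active transport. A reconstruction from the trace (the exact
# error handling in nvmf/common.sh may differ): the associative array maps
# each transport to the *name* of an environment variable, and the value is
# resolved via indirect expansion.
#
get_main_ns_ip() {
	local ip
	local -A ip_candidates
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP
	[[ -z $TEST_TRANSPORT ]] && return 1
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	# For tcp, ip becomes the string "NVMF_INITIATOR_IP" ...
	ip=${ip_candidates[$TEST_TRANSPORT]}
	# ... and ${!ip} resolves to 10.0.0.1, which is what the trace echoes.
	[[ -z ${!ip} ]] && return 1
	echo "${!ip}"
}
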
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGJiZDMyMmVmMWZjN2JmMmFhY2MyOTAyZDk2MjJiYWU9O1os: 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: ]] 00:23:13.448 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzk2ZGVlNmE5MzdhMGNmOGNlYjc5ZmM2YTcyOWVhNDEzZjMyNDAwM2MyNmUyYmEwMDFiZmVhZjAyMDdmZGViNzCKjhg=: 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.449 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.384 nvme0n1 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.384 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.642 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.643 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.643 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.643 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.643 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.643 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.643 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.643 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.643 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.643 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.643 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.643 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.643 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:14.643 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.643 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.577 nvme0n1 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.577 19:20:21 
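
#
# After every authenticated attach, the test verifies the controller really
# came up and then tears it down before trying the next key id, exactly as
# traced above (the bare "nvme0n1" tokens in the log are presumably the
# namespace device appearing on the host). Condensed sketch of that check:
#
ctrlr=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $ctrlr == "nvme0" ]]   # the [[ nvme0 == \n\v\m\e\0 ]] test in the trace
./scripts/rpc.py bdev_nvme_detach_controller nvme0
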
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY1MWI0MjU1NmY4ZDJhOTdiYmQ1NzEyNTMxY2RlNjQ+L0ol: 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: ]] 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTM1OThhODQxNzZlZDdmNWZkZDFmMGFmMTFhNDk2YTYoDhsL: 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.577 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.835 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.835 19:20:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.835 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.835 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.835 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.835 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.835 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.835 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.835 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.835 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.835 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.835 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.835 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.835 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.835 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.769 nvme0n1 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==: 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: ]] 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc2M2VlZThkMmNhMjAwNGY2NjRkY2JkYTM2ZmE4NmRQ5LPt: 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:16.769 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.027 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.027 
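
#
# Every secret in this log uses the NVMe-oF DH-HMAC-CHAP representation
# 'DHHC-1:<nn>:<base64>:'. Per the NVMe secret format (background knowledge,
# not taken from this log), <nn> encodes the secret class: 00 = 32-byte
# secret used as-is, 01/02/03 = 32/48/64-byte secrets transformed with
# SHA-256/384/512, and the base64 payload carries the secret plus a 4-byte
# CRC-32 tail. Quick sanity check on the key-id 3 secret above:
#
key='DHHC-1:02:ZWNiMzk4OWY2ZjA1YzBhMWNlM2NmNTUxN2M2ZTQyNzg0MDBhOTIwZmQ5MmU3MGVlG7j2Fw==:'
IFS=: read -r _ class payload _ <<< "$key"
# Expect 52 decoded bytes for class 02: a 48-byte secret plus the 4-byte CRC.
echo "class=$class decoded=$(printf '%s' "$payload" | base64 -d | wc -c) bytes"
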
19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.958 nvme0n1 00:23:17.958 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.959 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.959 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.959 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.959 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.959 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.959 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.959 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.959 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.959 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDEwNWY5ZjJlYjQ0MTA4OGFkMWVlMWYzYzQzMzBlZjgyZDQ0ZjlkOTFiNThiNGY1MGY3YjZlOGQ2OTkyZjQyYhZpLL0=: 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.216 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.217 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.217 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.217 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.217 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.217 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.217 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.217 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:18.217 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.217 19:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.150 nvme0n1 00:23:19.150 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.150 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.150 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.150 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.150 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.150 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.150 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.150 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.150 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.150 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.408 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.408 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:19.408 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.408 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.408 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmMGExMjY4MDA4MDNmNmY3YTllOTg3ZGIwOWYwY2Y5NzE0MmZjODg4ZTE4NGU2zGmXwQ==: 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: ]] 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGVjYWE5MzA0MzFhNTdmNWY2MTAwOGY3NmI5NmU2NDM5MjljNzk2MDdmM2I0MjhkavBiPw==: 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.409 request: 00:23:19.409 { 00:23:19.409 "name": "nvme0", 00:23:19.409 "trtype": "tcp", 00:23:19.409 "traddr": "10.0.0.1", 00:23:19.409 "adrfam": "ipv4", 00:23:19.409 "trsvcid": "4420", 00:23:19.409 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:19.409 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:19.409 "prchk_reftag": false, 00:23:19.409 "prchk_guard": false, 00:23:19.409 "hdgst": false, 00:23:19.409 "ddgst": false, 00:23:19.409 "method": "bdev_nvme_attach_controller", 00:23:19.409 "req_id": 1 00:23:19.409 } 00:23:19.409 Got JSON-RPC error response 00:23:19.409 response: 00:23:19.409 { 00:23:19.409 "code": -5, 00:23:19.409 "message": "Input/output error" 00:23:19.409 } 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.409 19:20:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.409 request: 00:23:19.409 { 00:23:19.409 "name": "nvme0", 00:23:19.409 "trtype": "tcp", 00:23:19.409 "traddr": "10.0.0.1", 00:23:19.409 "adrfam": "ipv4", 00:23:19.409 "trsvcid": "4420", 00:23:19.409 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:19.409 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:19.409 "prchk_reftag": false, 00:23:19.409 "prchk_guard": false, 00:23:19.409 "hdgst": false, 00:23:19.409 "ddgst": false, 00:23:19.409 "dhchap_key": "key2", 00:23:19.409 "method": "bdev_nvme_attach_controller", 00:23:19.409 "req_id": 1 00:23:19.409 } 00:23:19.409 Got JSON-RPC error response 00:23:19.409 response: 00:23:19.409 { 00:23:19.409 "code": -5, 00:23:19.409 "message": "Input/output error" 00:23:19.409 } 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.409 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.668 request: 00:23:19.668 { 00:23:19.668 "name": "nvme0", 00:23:19.668 "trtype": "tcp", 00:23:19.668 "traddr": "10.0.0.1", 00:23:19.668 "adrfam": "ipv4", 00:23:19.668 "trsvcid": "4420", 00:23:19.668 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:19.668 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:19.668 "prchk_reftag": false, 00:23:19.668 "prchk_guard": false, 00:23:19.668 "hdgst": false, 00:23:19.668 "ddgst": false, 00:23:19.668 "dhchap_key": "key1", 00:23:19.668 "dhchap_ctrlr_key": "ckey2", 00:23:19.668 "method": "bdev_nvme_attach_controller", 00:23:19.668 "req_id": 1 00:23:19.668 } 00:23:19.668 Got JSON-RPC error response 00:23:19.668 response: 00:23:19.668 { 00:23:19.668 "code": -5, 00:23:19.668 "message": "Input/output error" 00:23:19.668 } 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:19.668 rmmod nvme_tcp 00:23:19.668 rmmod nvme_fabrics 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2628697 ']' 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2628697 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2628697 ']' 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2628697 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2628697 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2628697' 00:23:19.668 killing process with pid 2628697 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2628697 00:23:19.668 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2628697 00:23:19.928 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:19.928 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:19.928 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:19.928 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.928 19:20:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:19.928 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.928 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.928 19:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.836 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:21.836 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:21.836 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:21.836 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:21.836 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:21.836 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:22.097 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:22.097 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:22.097 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:22.097 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:22.097 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:22.097 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:22.097 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:23.036 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:23:23.036 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:23:23.036 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:23:23.036 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:23:23.036 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:23:23.036 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:23:23.036 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:23:23.036 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:23:23.036 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:23:23.036 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:23:23.036 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:23:23.294 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:23:23.294 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:23:23.294 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:23:23.294 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:23:23.294 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:23:24.234 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:23:24.234 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Dud /tmp/spdk.key-null.HyB /tmp/spdk.key-sha256.Fea /tmp/spdk.key-sha384.C3l /tmp/spdk.key-sha512.2zz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:24.234 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:25.169 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:23:25.169 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:25.169 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:23:25.169 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:23:25.169 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:23:25.169 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:23:25.169 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:23:25.169 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:23:25.169 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:23:25.169 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:23:25.169 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:23:25.169 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:23:25.169 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:23:25.169 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:23:25.169 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:23:25.169 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:23:25.169 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:23:25.169 00:23:25.169 real 0m53.528s 00:23:25.169 user 0m50.856s 00:23:25.169 sys 0m5.293s 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.169 ************************************ 00:23:25.169 END TEST nvmf_auth_host 00:23:25.169 ************************************ 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.169 ************************************ 00:23:25.169 START TEST nvmf_digest 00:23:25.169 ************************************ 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:25.169 * Looking for test storage... 
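[Annotation] The auth suite that just finished (nvmf_auth_host) drives NVMe/TCP DH-HMAC-CHAP entirely over JSON-RPC. A minimal sketch of one positive pass, assuming rpc_cmd forwards to scripts/rpc.py against the target's default socket (paths abbreviated) and that key3/ckey3 name keys registered earlier in the run (the /tmp/spdk.key-* files removed during the cleanup above):

  # Pin the initiator to one digest/DH-group combination for this pass;
  # the suite iterates digests and groups (this excerpt shows sha512 + ffdhe8192).
  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # Connect with a host key, plus a controller key for bidirectional auth.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3

  # Confirm the controller came up, then drop it before the next combination.
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The negative cases earlier in the log (attach with a missing or mismatched --dhchap-key) run under the NOT helper: the attach is expected to fail, and the suite asserts on exactly the "code": -5 / "Input/output error" JSON-RPC responses printed above.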
00:23:25.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.169 
19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.169 19:20:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:27.074 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.074 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:27.074 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:27.074 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:27.074 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:27.074 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:27.074 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:27.074 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:27.074 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:27.074 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:27.074 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:27.074 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:27.075 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:27.075 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.075 
19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:23:27.075 Found net devices under 0000:08:00.0: cvl_0_0 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:27.075 Found net devices under 0000:08:00.1: cvl_0_1 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.075 19:20:32 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:27.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:23:27.075 00:23:27.075 --- 10.0.0.2 ping statistics --- 00:23:27.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.075 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:27.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:23:27.075 00:23:27.075 --- 10.0.0.1 ping statistics --- 00:23:27.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.075 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:27.075 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:27.076 ************************************ 00:23:27.076 START TEST nvmf_digest_clean 00:23:27.076 ************************************ 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2636627 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2636627 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2636627 ']' 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:27.076 19:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:27.076 [2024-07-24 19:20:33.003988] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:23:27.076 [2024-07-24 19:20:33.004086] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.076 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.076 [2024-07-24 19:20:33.068951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.334 [2024-07-24 19:20:33.184675] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.334 [2024-07-24 19:20:33.184744] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.334 [2024-07-24 19:20:33.184760] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.334 [2024-07-24 19:20:33.184774] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.334 [2024-07-24 19:20:33.184785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
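[Annotation] The nvmf_tgt coming up here was launched inside the cvl_0_0_ns_spdk network namespace that nvmftestinit built a moment earlier (nvmf/common.sh@248-268 above). Stripped of the xtrace prefixes, that plumbing amounts to roughly:

  # Move the target-side port into its own namespace; the initiator-side
  # port (cvl_0_1) stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Let NVMe/TCP traffic into the initiator side, then verify reachability
  # in both directions before any tests run.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is why every target-side command from here on, including the nvmf_tgt invocation above, is wrapped in ip netns exec cvl_0_0_ns_spdk.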
00:23:27.334 [2024-07-24 19:20:33.184815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.334 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:27.334 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:27.334 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.334 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:27.334 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:27.334 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.334 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:27.334 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:27.334 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:27.334 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.334 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:27.591 null0 00:23:27.591 [2024-07-24 19:20:33.379080] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.591 [2024-07-24 19:20:33.403311] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2636732 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2636732 /var/tmp/bperf.sock 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2636732 ']' 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:27.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:27.591 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:27.591 [2024-07-24 19:20:33.456283] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:23:27.591 [2024-07-24 19:20:33.456377] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2636732 ] 00:23:27.591 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.591 [2024-07-24 19:20:33.517355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.848 [2024-07-24 19:20:33.634559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.848 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:27.848 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:27.848 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:27.848 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:27.848 19:20:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:28.106 19:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:28.106 19:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:28.671 nvme0n1 00:23:28.671 19:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:28.671 19:20:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:28.671 Running I/O for 2 seconds... 
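[Annotation] A note on the harness while the two-second run executes: bperf_rpc and bperf_py (host/digest.sh@18-19) are thin wrappers that point the stock SPDK tools at bdevperf's private RPC socket rather than the target's. The sequence just logged reduces to roughly (paths abbreviated):

  # bdevperf was started with -z --wait-for-rpc, so it idles until told
  # to initialize its framework.
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

  # Attach the target as a bdev with TCP data digest enabled (--ddgst);
  # the "clean" variant leaves crc32c to the software accel path.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Kick off the configured workload (randread, 4096-byte I/O, queue depth 128).
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

After the run, the accel_get_stats | jq check below confirms the crc32c digests were executed by the software module.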
00:23:30.615 
00:23:30.615 Latency(us)
00:23:30.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:30.615 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:23:30.615 nvme0n1 : 2.04 16951.29 66.22 0.00 0.00 7395.03 4029.25 44661.57
00:23:30.615 ===================================================================================================================
00:23:30.615 Total : 16951.29 66.22 0.00 0.00 7395.03 4029.25 44661.57
00:23:30.615 0
00:23:30.615 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:23:30.615 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:23:30.615 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:23:30.615 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:23:30.615 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:23:30.615 | select(.opcode=="crc32c")
00:23:30.615 | "\(.module_name) \(.executed)"'
00:23:30.896 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:23:30.896 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:23:30.896 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:23:30.896 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:23:30.896 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2636732
00:23:30.896 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2636732 ']'
00:23:30.896 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2636732
00:23:30.896 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:23:31.155 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:31.155 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2636732
00:23:31.155 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:31.155 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:31.155 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2636732'
00:23:31.155 killing process with pid 2636732
00:23:31.155 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2636732
00:23:31.155 Received shutdown signal, test time was about 2.000000 seconds
00:23:31.155 
00:23:31.155 Latency(us)
00:23:31.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:31.155 ===================================================================================================================
00:23:31.155 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:31.155 19:20:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2636732 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2637049 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2637049 /var/tmp/bperf.sock 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2637049 ']' 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:31.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:31.155 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:31.413 [2024-07-24 19:20:37.186296] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:23:31.413 [2024-07-24 19:20:37.186387] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637049 ] 00:23:31.414 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:31.414 Zero copy mechanism will not be used. 
00:23:31.414 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.414 [2024-07-24 19:20:37.248105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.414 [2024-07-24 19:20:37.365161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.675 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.675 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:31.675 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:31.675 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:31.675 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:31.933 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:31.933 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:32.192 nvme0n1 00:23:32.452 19:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:32.452 19:20:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:32.452 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:32.452 Zero copy mechanism will not be used. 00:23:32.452 Running I/O for 2 seconds... 
00:23:34.361 
00:23:34.361 Latency(us)
00:23:34.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:34.361 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:23:34.361 nvme0n1 : 2.00 4883.45 610.43 0.00 0.00 3271.58 867.75 7670.14
00:23:34.361 ===================================================================================================================
00:23:34.361 Total : 4883.45 610.43 0.00 0.00 3271.58 867.75 7670.14
00:23:34.361 0
00:23:34.361 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:23:34.361 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:23:34.361 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:23:34.361 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:23:34.361 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:23:34.361 | select(.opcode=="crc32c")
00:23:34.361 | "\(.module_name) \(.executed)"'
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2637049
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2637049 ']'
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2637049
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2637049
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2637049'
00:23:34.930 killing process with pid 2637049
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2637049
00:23:34.930 Received shutdown signal, test time was about 2.000000 seconds
00:23:34.930 
00:23:34.930 Latency(us)
00:23:34.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:34.930 ===================================================================================================================
00:23:34.930 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2637049 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2637367 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2637367 /var/tmp/bperf.sock 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2637367 ']' 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:34.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:34.930 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:34.930 [2024-07-24 19:20:40.932673] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
00:23:34.930 [2024-07-24 19:20:40.932771] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637367 ] 00:23:35.189 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.189 [2024-07-24 19:20:40.992455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.189 [2024-07-24 19:20:41.109208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.189 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:35.189 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:35.189 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:35.189 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:35.189 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:35.757 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:35.757 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:36.015 nvme0n1 00:23:36.015 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:36.015 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:36.273 Running I/O for 2 seconds... 
00:23:38.182 
00:23:38.182 Latency(us)
00:23:38.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:38.182 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:23:38.182 nvme0n1 : 2.01 17631.11 68.87 0.00 0.00 7242.27 5291.43 12136.30
00:23:38.182 ===================================================================================================================
00:23:38.182 Total : 17631.11 68.87 0.00 0.00 7242.27 5291.43 12136.30
00:23:38.182 0
00:23:38.182 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:23:38.182 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:23:38.182 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:23:38.182 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:23:38.182 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:23:38.182 | select(.opcode=="crc32c")
00:23:38.182 | "\(.module_name) \(.executed)"'
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2637367
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2637367 ']'
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2637367
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2637367
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2637367'
00:23:38.440 killing process with pid 2637367
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2637367
00:23:38.440 Received shutdown signal, test time was about 2.000000 seconds
00:23:38.440 
00:23:38.440 Latency(us)
00:23:38.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:38.440 ===================================================================================================================
00:23:38.440 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:38.440 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2637367 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2637762 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2637762 /var/tmp/bperf.sock 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2637762 ']' 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:38.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:38.699 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:38.699 [2024-07-24 19:20:44.661127] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:23:38.699 [2024-07-24 19:20:44.661221] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637762 ] 00:23:38.699 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:38.699 Zero copy mechanism will not be used. 
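After every timed pass the script checks which accel module actually executed the crc32c digests. With scan_dsa=false the expected module is plain software, so the check traced at host/digest.sh@93-96 reduces to the sketch below (a minimal transliteration of the traced commands, using the same workspace paths and the SPDK variable from the earlier sketch):

  # Dump accel framework stats from bperf and keep only the crc32c operation row.
  read -r acc_module acc_executed < <(
    "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  # Pass only if at least one crc32c operation ran, and it ran in the expected module.
  (( acc_executed > 0 )) && [[ $acc_module == software ]]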
00:23:38.699 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.957 [2024-07-24 19:20:44.725966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.957 [2024-07-24 19:20:44.842902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.957 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.957 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:38.957 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:38.957 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:38.957 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:39.526 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:39.526 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:39.784 nvme0n1 00:23:39.784 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:39.784 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:39.784 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:39.784 Zero copy mechanism will not be used. 00:23:39.784 Running I/O for 2 seconds... 
00:23:42.326 
00:23:42.326 Latency(us)
00:23:42.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:42.326 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:23:42.326 nvme0n1 : 2.00 5047.71 630.96 0.00 0.00 3162.14 2427.26 7184.69
00:23:42.326 ===================================================================================================================
00:23:42.326 Total : 5047.71 630.96 0.00 0.00 3162.14 2427.26 7184.69
00:23:42.326 0
00:23:42.326 19:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:23:42.326 19:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:23:42.326 19:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:23:42.326 19:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:23:42.326 19:20:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:23:42.326 | select(.opcode=="crc32c")
00:23:42.326 | "\(.module_name) \(.executed)"'
00:23:42.326 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:23:42.326 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:23:42.326 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:23:42.326 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:23:42.326 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2637762
00:23:42.326 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2637762 ']'
00:23:42.326 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2637762
00:23:42.326 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:23:42.326 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:42.326 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2637762
00:23:42.327 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:42.327 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:42.327 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2637762'
00:23:42.327 killing process with pid 2637762
00:23:42.327 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2637762
00:23:42.327 Received shutdown signal, test time was about 2.000000 seconds
00:23:42.327 
00:23:42.327 Latency(us)
00:23:42.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:42.327 ===================================================================================================================
00:23:42.327 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:42.327 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2637762 00:23:42.586 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2636627 00:23:42.586 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2636627 ']' 00:23:42.586 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2636627 00:23:42.586 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:42.586 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:42.586 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2636627 00:23:42.586 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:42.586 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:42.586 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2636627' 00:23:42.586 killing process with pid 2636627 00:23:42.586 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2636627 00:23:42.586 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2636627 00:23:42.586 00:23:42.586 real 0m15.643s 00:23:42.586 user 0m31.956s 00:23:42.586 sys 0m3.940s 00:23:42.586 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:42.586 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:42.586 ************************************ 00:23:42.586 END TEST nvmf_digest_clean 00:23:42.586 ************************************ 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:42.845 ************************************ 00:23:42.845 START TEST nvmf_digest_error 00:23:42.845 ************************************ 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2638105 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:42.845 19:20:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2638105 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2638105 ']' 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:42.845 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:42.845 [2024-07-24 19:20:48.700705] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:23:42.845 [2024-07-24 19:20:48.700808] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.845 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.845 [2024-07-24 19:20:48.765453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.104 [2024-07-24 19:20:48.881231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.104 [2024-07-24 19:20:48.881289] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.104 [2024-07-24 19:20:48.881305] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.104 [2024-07-24 19:20:48.881318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.104 [2024-07-24 19:20:48.881329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
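Where nvmf_digest_clean only verified that digests were computed, the nvmf_digest_error test starting here wires the target's crc32c opcode to the injectable accel error module, so corrupted digests can be produced on demand; the host side then reports the data digest error storm logged further below. The RPCs traced in the next lines amount to the following hedged sketch. In the trace, rpc_cmd goes to the nvmf_tgt's default RPC socket inside the cvl_0_0_ns_spdk namespace, abbreviated here as plain rpc.py; the precise semantics of -i 256 are whatever accel_error_inject_error defines, copied verbatim from the trace.

  # Target side: route every crc32c operation through the accel "error" module.
  rpc.py accel_assign_opc -o crc32c -m error
  # bperf side: keep NVMe error counters and retry forever, so injected digest
  # failures are survivable instead of fatal.
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Injection stays disabled while the controller connects...
  rpc.py accel_error_inject_error -o crc32c -t disable
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ...then crc32c corruption is armed (-t corrupt -i 256) before perform_tests runs,
  # which produces the repeated "data digest error on tqpair" lines below.
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256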
00:23:43.104 [2024-07-24 19:20:48.881358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.104 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:43.104 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:43.104 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.104 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:43.104 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:43.104 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.104 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:43.104 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.104 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:43.104 [2024-07-24 19:20:48.978013] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:43.104 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.104 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:23:43.104 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:23:43.104 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.104 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:43.104 null0 00:23:43.104 [2024-07-24 19:20:49.085926] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.104 [2024-07-24 19:20:49.110171] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2638214 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2638214 /var/tmp/bperf.sock 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2638214 ']' 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:43.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:43.104 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:43.363 [2024-07-24 19:20:49.163162] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:23:43.363 [2024-07-24 19:20:49.163253] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2638214 ] 00:23:43.363 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.363 [2024-07-24 19:20:49.224362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.363 [2024-07-24 19:20:49.341372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.621 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:43.621 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:43.621 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:43.621 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:43.879 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:43.879 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.879 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:43.879 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.879 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:43.879 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:44.137 nvme0n1 00:23:44.137 19:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:44.137 19:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.137 19:20:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:44.395 19:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.395 19:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:44.395 19:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:44.395 Running I/O for 2 seconds... 00:23:44.395 [2024-07-24 19:20:50.300202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.395 [2024-07-24 19:20:50.300261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.395 [2024-07-24 19:20:50.300282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.395 [2024-07-24 19:20:50.318395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.395 [2024-07-24 19:20:50.318432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.395 [2024-07-24 19:20:50.318451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.395 [2024-07-24 19:20:50.334630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.395 [2024-07-24 19:20:50.334665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.395 [2024-07-24 19:20:50.334684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.395 [2024-07-24 19:20:50.348060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.395 [2024-07-24 19:20:50.348093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.395 [2024-07-24 19:20:50.348122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.395 [2024-07-24 19:20:50.362120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.395 [2024-07-24 19:20:50.362154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.395 [2024-07-24 19:20:50.362172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.395 [2024-07-24 19:20:50.377728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.395 [2024-07-24 19:20:50.377760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.395 [2024-07-24 19:20:50.377779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.395 [2024-07-24 19:20:50.392544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.395 [2024-07-24 19:20:50.392576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.395 [2024-07-24 19:20:50.392595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.395 [2024-07-24 19:20:50.405879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.395 [2024-07-24 19:20:50.405914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.395 [2024-07-24 19:20:50.405934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.423331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.423367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.423386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.439775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.439809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.439828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.453047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.453088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.453106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.467422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.467456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.467474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.484240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.484279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 
[2024-07-24 19:20:50.484299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.497609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.497642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.497661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.514274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.514307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.514325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.530767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.530800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.530819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.543391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.543423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.543442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.561074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.561108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.561126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.579275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.579310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.579328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.591989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.592027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2150 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.592045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.608824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.608856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.608875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.621309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.621341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.621359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.638572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.638604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.638623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.653 [2024-07-24 19:20:50.657391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.653 [2024-07-24 19:20:50.657428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.653 [2024-07-24 19:20:50.657446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.911 [2024-07-24 19:20:50.674556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.911 [2024-07-24 19:20:50.674591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.911 [2024-07-24 19:20:50.674610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.911 [2024-07-24 19:20:50.687865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.911 [2024-07-24 19:20:50.687898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.911 [2024-07-24 19:20:50.687918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.911 [2024-07-24 19:20:50.704606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.911 [2024-07-24 19:20:50.704640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:91 nsid:1 lba:24456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.911 [2024-07-24 19:20:50.704659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.911 [2024-07-24 19:20:50.717659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.911 [2024-07-24 19:20:50.717699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.911 [2024-07-24 19:20:50.717718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.911 [2024-07-24 19:20:50.734375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.911 [2024-07-24 19:20:50.734409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.911 [2024-07-24 19:20:50.734428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.911 [2024-07-24 19:20:50.746875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.911 [2024-07-24 19:20:50.746907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.911 [2024-07-24 19:20:50.746937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.911 [2024-07-24 19:20:50.761520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.911 [2024-07-24 19:20:50.761552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.911 [2024-07-24 19:20:50.761570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.911 [2024-07-24 19:20:50.776114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.911 [2024-07-24 19:20:50.776146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.911 [2024-07-24 19:20:50.776164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.911 [2024-07-24 19:20:50.791215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.911 [2024-07-24 19:20:50.791247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.911 [2024-07-24 19:20:50.791266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.911 [2024-07-24 19:20:50.806023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.911 [2024-07-24 19:20:50.806054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.911 [2024-07-24 19:20:50.806072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.911 [2024-07-24 19:20:50.818660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.911 [2024-07-24 19:20:50.818691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.911 [2024-07-24 19:20:50.818710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.911 [2024-07-24 19:20:50.833501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.911 [2024-07-24 19:20:50.833535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.911 [2024-07-24 19:20:50.833553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.911 [2024-07-24 19:20:50.847866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.911 [2024-07-24 19:20:50.847898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.911 [2024-07-24 19:20:50.847917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.911 [2024-07-24 19:20:50.863791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.912 [2024-07-24 19:20:50.863831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.912 [2024-07-24 19:20:50.863849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.912 [2024-07-24 19:20:50.879917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.912 [2024-07-24 19:20:50.879949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.912 [2024-07-24 19:20:50.879968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.912 [2024-07-24 19:20:50.892368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.912 [2024-07-24 19:20:50.892400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.912 [2024-07-24 19:20:50.892418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.912 [2024-07-24 19:20:50.910253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.912 
[2024-07-24 19:20:50.910285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.912 [2024-07-24 19:20:50.910303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.912 [2024-07-24 19:20:50.924451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:44.912 [2024-07-24 19:20:50.924492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.912 [2024-07-24 19:20:50.924514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.170 [2024-07-24 19:20:50.941796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.170 [2024-07-24 19:20:50.941831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.170 [2024-07-24 19:20:50.941850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.170 [2024-07-24 19:20:50.958167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.170 [2024-07-24 19:20:50.958200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.170 [2024-07-24 19:20:50.958219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.170 [2024-07-24 19:20:50.970854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.170 [2024-07-24 19:20:50.970886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.170 [2024-07-24 19:20:50.970905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.170 [2024-07-24 19:20:50.988816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.170 [2024-07-24 19:20:50.988855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.170 [2024-07-24 19:20:50.988874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.170 [2024-07-24 19:20:51.002077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.170 [2024-07-24 19:20:51.002111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.170 [2024-07-24 19:20:51.002137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.170 [2024-07-24 19:20:51.019003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xdca2c0) 00:23:45.170 [2024-07-24 19:20:51.019041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.170 [2024-07-24 19:20:51.019060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.170 [2024-07-24 19:20:51.030636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.170 [2024-07-24 19:20:51.030670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.170 [2024-07-24 19:20:51.030688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.170 [2024-07-24 19:20:51.048649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.170 [2024-07-24 19:20:51.048683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.170 [2024-07-24 19:20:51.048701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.170 [2024-07-24 19:20:51.066129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.170 [2024-07-24 19:20:51.066162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.170 [2024-07-24 19:20:51.066180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.170 [2024-07-24 19:20:51.078593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.170 [2024-07-24 19:20:51.078626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.170 [2024-07-24 19:20:51.078645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.170 [2024-07-24 19:20:51.095649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.170 [2024-07-24 19:20:51.095681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.170 [2024-07-24 19:20:51.095699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.170 [2024-07-24 19:20:51.109692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.170 [2024-07-24 19:20:51.109725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.170 [2024-07-24 19:20:51.109743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.170 [2024-07-24 19:20:51.126341] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.170 [2024-07-24 19:20:51.126373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.170 [2024-07-24 19:20:51.126392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.170 [2024-07-24 19:20:51.142831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.170 [2024-07-24 19:20:51.142871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.171 [2024-07-24 19:20:51.142891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.171 [2024-07-24 19:20:51.156458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.171 [2024-07-24 19:20:51.156498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.171 [2024-07-24 19:20:51.156518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.171 [2024-07-24 19:20:51.173910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.171 [2024-07-24 19:20:51.173943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.171 [2024-07-24 19:20:51.173962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.189501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.189537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.189556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.203590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.203623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.203643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.217493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.217525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.217543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:45.429 [2024-07-24 19:20:51.231809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.231841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.231859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.246186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.246217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.246235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.260357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.260388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.260406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.274699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.274732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.274750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.290719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.290751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.290770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.304442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.304474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.304500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.321413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.321446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.321464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.335038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.335070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.335088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.351456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.351498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.351518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.368703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.368735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.368753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.382144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.382176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.382194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.399057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.399089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.399113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.412650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.412682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.412701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.428137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.428170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.428188] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.429 [2024-07-24 19:20:51.442630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.429 [2024-07-24 19:20:51.442668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.429 [2024-07-24 19:20:51.442698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.688 [2024-07-24 19:20:51.457573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.688 [2024-07-24 19:20:51.457608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.688 [2024-07-24 19:20:51.457627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.688 [2024-07-24 19:20:51.470600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.688 [2024-07-24 19:20:51.470632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.688 [2024-07-24 19:20:51.470651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.688 [2024-07-24 19:20:51.485694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.688 [2024-07-24 19:20:51.485727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.688 [2024-07-24 19:20:51.485746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.688 [2024-07-24 19:20:51.500892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.688 [2024-07-24 19:20:51.500925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.688 [2024-07-24 19:20:51.500944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.688 [2024-07-24 19:20:51.516339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.688 [2024-07-24 19:20:51.516371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.688 [2024-07-24 19:20:51.516390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.688 [2024-07-24 19:20:51.527935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.688 [2024-07-24 19:20:51.527974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.688 [2024-07-24 19:20:51.527993] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.688 [2024-07-24 19:20:51.544870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.688 [2024-07-24 19:20:51.544902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.688 [2024-07-24 19:20:51.544921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.688 [2024-07-24 19:20:51.558973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.688 [2024-07-24 19:20:51.559006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.688 [2024-07-24 19:20:51.559025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.688 [2024-07-24 19:20:51.574024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.688 [2024-07-24 19:20:51.574056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.688 [2024-07-24 19:20:51.574075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.688 [2024-07-24 19:20:51.590925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.688 [2024-07-24 19:20:51.590957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.688 [2024-07-24 19:20:51.590975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.688 [2024-07-24 19:20:51.604787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.688 [2024-07-24 19:20:51.604819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.688 [2024-07-24 19:20:51.604837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.688 [2024-07-24 19:20:51.622772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.688 [2024-07-24 19:20:51.622805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.688 [2024-07-24 19:20:51.622824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.689 [2024-07-24 19:20:51.639394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.689 [2024-07-24 19:20:51.639427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:45.689 [2024-07-24 19:20:51.639446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.689 [2024-07-24 19:20:51.654375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.689 [2024-07-24 19:20:51.654408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.689 [2024-07-24 19:20:51.654427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.689 [2024-07-24 19:20:51.666841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.689 [2024-07-24 19:20:51.666874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.689 [2024-07-24 19:20:51.666892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.689 [2024-07-24 19:20:51.680919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.689 [2024-07-24 19:20:51.680951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.689 [2024-07-24 19:20:51.680970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.689 [2024-07-24 19:20:51.695285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.689 [2024-07-24 19:20:51.695317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.689 [2024-07-24 19:20:51.695335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.710666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.710702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.710722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.726262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.726296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.726315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.738777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.738810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14931 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.738829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.755466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.755505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.755524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.772131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.772163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.772182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.785775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.785809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.785835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.801994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.802026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.802044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.815847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.815879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.815897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.832763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.832795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.832813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.849783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.849816] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.849834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.862908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.862940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.862958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.879331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.879363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.879381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.892830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.892862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.892880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.909714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.909746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.909764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.922837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.947 [2024-07-24 19:20:51.922869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.947 [2024-07-24 19:20:51.922888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.947 [2024-07-24 19:20:51.936626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.948 [2024-07-24 19:20:51.936658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.948 [2024-07-24 19:20:51.936684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.948 [2024-07-24 19:20:51.951831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:45.948 [2024-07-24 19:20:51.951863] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.948 [2024-07-24 19:20:51.951896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:51.968107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:51.968145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:51.968164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:51.980826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:51.980862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:51.980880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:51.996170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:51.996203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:51.996221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:52.012883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:52.012918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:52.012937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:52.024685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:52.024718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:52.024736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:52.043143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:52.043184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:52.043209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:52.060758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 
00:23:46.206 [2024-07-24 19:20:52.060789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:52.060807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:52.077844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:52.077877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:52.077896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:52.093803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:52.093835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:52.093854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:52.111325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:52.111357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:52.111375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:52.126545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:52.126578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:52.126601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:52.142368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:52.142401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:52.142419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:52.160743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:52.160776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:52.160794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:52.178966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:52.178999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:52.179017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:52.191259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:52.191297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:52.191317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.206 [2024-07-24 19:20:52.208671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.206 [2024-07-24 19:20:52.208703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.206 [2024-07-24 19:20:52.208721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.465 [2024-07-24 19:20:52.227562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.465 [2024-07-24 19:20:52.227597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.465 [2024-07-24 19:20:52.227617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.465 [2024-07-24 19:20:52.246415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.465 [2024-07-24 19:20:52.246456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.465 [2024-07-24 19:20:52.246474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.465 [2024-07-24 19:20:52.265319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.465 [2024-07-24 19:20:52.265368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.465 [2024-07-24 19:20:52.265389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.465 [2024-07-24 19:20:52.283849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdca2c0) 00:23:46.465 [2024-07-24 19:20:52.283881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.465 [2024-07-24 19:20:52.283900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.465 00:23:46.465 Latency(us) 00:23:46.465 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:46.465 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:23:46.465 nvme0n1 : 2.05 16232.49 63.41 0.00 0.00 7719.53 3956.43 50875.35
00:23:46.465 ===================================================================================================================
00:23:46.465 Total : 16232.49 63.41 0.00 0.00 7719.53 3956.43 50875.35
00:23:46.465 0
00:23:46.465 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:46.465 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:46.465 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:46.465 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:46.465 | .driver_specific
00:23:46.465 | .nvme_error
00:23:46.465 | .status_code
00:23:46.465 | .command_transient_transport_error'
00:23:46.723 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 130 > 0 ))
00:23:46.723 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2638214
00:23:46.723 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2638214 ']'
00:23:46.723 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2638214
00:23:46.723 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:23:46.723 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:46.723 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2638214
00:23:46.723 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:46.723 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:46.723 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2638214'
00:23:46.723 killing process with pid 2638214
00:23:46.723 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2638214
00:23:46.723 Received shutdown signal, test time was about 2.000000 seconds
00:23:46.723
00:23:46.723 Latency(us)
00:23:46.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:46.723 ===================================================================================================================
00:23:46.723 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:46.723 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2638214
00:23:46.982 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:23:46.982 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:46.982 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:23:46.982 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
host/digest.sh@56 -- # bs=131072
00:23:46.982 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:23:46.982 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2638531
00:23:46.982 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2638531 /var/tmp/bperf.sock
00:23:46.982 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:23:46.982 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2638531 ']'
00:23:46.982 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:46.982 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:46.982 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:46.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:46.982 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:46.982 19:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
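Editor's aside, not part of the log: the xtrace above is the generic launch-and-wait pattern for a standalone bdevperf target. A minimal sketch of the same step, assuming a local SPDK checkout at the hypothetical SPDK_DIR; the harness's own waitforlisten helper performs the equivalent of the polling loop shown here:

    SPDK_DIR=/path/to/spdk   # assumed placeholder, not a path from the log

    # -m 2: core mask 0x2; -r: private RPC socket; -w/-o/-q: random reads of
    # 131072 bytes at queue depth 16; -t 2: two-second runs; -z: start idle and
    # wait for a perform_tests RPC instead of running immediately.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -q 16 -t 2 -z &
    bperfpid=$!

    # Poll until the UNIX-domain RPC socket accepts requests before issuing RPCs.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done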
00:23:46.982 [2024-07-24 19:20:52.921972] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:23:46.982 [2024-07-24 19:20:52.922069] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2638531 ]
00:23:46.982 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:46.982 Zero copy mechanism will not be used.
00:23:46.982 EAL: No free 2048 kB hugepages reported on node 1
00:23:47.240 [2024-07-24 19:20:52.981683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:47.240 [2024-07-24 19:20:53.098535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:47.240 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:47.240 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:23:47.240 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:47.240 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:47.498 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:47.498 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:47.498 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:47.498 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:47.498 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:47.498 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:47.756 nvme0n1
00:23:47.756 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:47.756 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:47.756 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:48.014 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:48.014 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:48.014 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:48.014 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:48.014 Zero copy mechanism will not be used.
00:23:48.014 Running I/O for 2 seconds...
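Editor's aside, not part of the log: the trace just above amounts to the test's recipe for forcing and counting data digest errors. A condensed sketch under the same assumption as before (hypothetical SPDK_DIR, bdevperf already listening on /var/tmp/bperf.sock); every RPC name and flag below is taken verbatim from the trace, and the final pipeline is how get_transient_errcount reads the number asserted nonzero earlier (the (( 130 > 0 )) check):

    SPDK_DIR=/path/to/spdk   # assumed placeholder, not a path from the log
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    # Keep per-bdev NVMe status counters and retry transport errors without limit,
    # so each corrupted digest becomes a statistic instead of a failed run.
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target over TCP with data digest (--ddgst) enabled; reads now
    # carry a CRC32C that the host verifies on receive.
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the software CRC32C periodically (-i 32 is the injection interval),
    # so receive-side digest checks fail and surface as transient transport errors.
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Drive I/O, then read back how many commands completed with the transient
    # transport error (00/22) printed throughout the log, as the harness does.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
    rpc bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'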
00:23:48.014 [2024-07-24 19:20:53.887497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.887557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.887578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.894292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.894329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.894349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.901726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.901762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.901781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.907961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.907997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.908016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.914920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.914955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.914975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.921718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.921753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.921772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.928431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.928465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.928491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.936006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.936042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.936061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.943649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.943685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.943704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.950547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.950581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.950600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.957992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.958028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.958047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.966069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.966111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.966132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.973267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.973302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.973322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.980600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.980635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.980654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.987332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.987366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.987385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:53.994820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:53.994854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:53.994873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:54.002667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:54.002702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:54.002721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:54.010411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:54.010446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.014 [2024-07-24 19:20:54.010465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.014 [2024-07-24 19:20:54.018912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.014 [2024-07-24 19:20:54.018947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.015 [2024-07-24 19:20:54.018966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.029067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.273 [2024-07-24 19:20:54.029105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.273 [2024-07-24 19:20:54.029125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.037663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.273 [2024-07-24 19:20:54.037701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:48.273 [2024-07-24 19:20:54.037721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.047350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.273 [2024-07-24 19:20:54.047391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.273 [2024-07-24 19:20:54.047411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.055690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.273 [2024-07-24 19:20:54.055726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.273 [2024-07-24 19:20:54.055745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.063937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.273 [2024-07-24 19:20:54.063973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.273 [2024-07-24 19:20:54.063993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.073158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.273 [2024-07-24 19:20:54.073195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.273 [2024-07-24 19:20:54.073214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.081615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.273 [2024-07-24 19:20:54.081652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.273 [2024-07-24 19:20:54.081671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.091020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.273 [2024-07-24 19:20:54.091055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.273 [2024-07-24 19:20:54.091075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.099750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.273 [2024-07-24 19:20:54.099787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6528 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.273 [2024-07-24 19:20:54.099806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.109089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.273 [2024-07-24 19:20:54.109124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.273 [2024-07-24 19:20:54.109152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.118043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.273 [2024-07-24 19:20:54.118078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.273 [2024-07-24 19:20:54.118097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.126016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.273 [2024-07-24 19:20:54.126052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.273 [2024-07-24 19:20:54.126070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.133952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.273 [2024-07-24 19:20:54.133989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.273 [2024-07-24 19:20:54.134008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.141630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.273 [2024-07-24 19:20:54.141665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.273 [2024-07-24 19:20:54.141683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.273 [2024-07-24 19:20:54.149517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.149554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.149572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.156107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.156142] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.156160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.162385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.162425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.162444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.168872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.168907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.168926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.175153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.175194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.175212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.180675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.180709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.180727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.184143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.184176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.184194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.190164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.190197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.190215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.196390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.196425] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.196443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.202711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.202745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.202763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.208908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.208942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.208960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.215650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.215685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.215704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.222026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.222061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.222080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.228377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.228411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.228429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.234857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.234892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.234910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.241199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 
00:23:48.274 [2024-07-24 19:20:54.241234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.241252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.247299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.247334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.247352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.253383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.253417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.253435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.259454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.259493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.259513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.265711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.265752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.265771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.271989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.272026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.272044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.278226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.278261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.278288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.274 [2024-07-24 19:20:54.284868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.274 [2024-07-24 19:20:54.284910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.274 [2024-07-24 19:20:54.284930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.533 [2024-07-24 19:20:54.291336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.533 [2024-07-24 19:20:54.291379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.533 [2024-07-24 19:20:54.291399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.533 [2024-07-24 19:20:54.297694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.533 [2024-07-24 19:20:54.297735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.533 [2024-07-24 19:20:54.297753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.533 [2024-07-24 19:20:54.303892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.533 [2024-07-24 19:20:54.303929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.533 [2024-07-24 19:20:54.303948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.533 [2024-07-24 19:20:54.310132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.533 [2024-07-24 19:20:54.310171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.533 [2024-07-24 19:20:54.310189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.533 [2024-07-24 19:20:54.316332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.533 [2024-07-24 19:20:54.316370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.533 [2024-07-24 19:20:54.316389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.533 [2024-07-24 19:20:54.322654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.322691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.322709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.328779] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.328815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.328834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.335007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.335044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.335062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.341203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.341239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.341258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.347399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.347435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.347453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.353582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.353621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.353639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.359800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.359838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.359856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.367514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.367553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.367571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:23:48.534 [2024-07-24 19:20:54.375539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.375578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.375597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.383261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.383301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.383324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.390595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.390635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.390666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.396778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.396814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.396832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.403085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.403124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.403142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.409351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.409388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.409406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.416008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.416046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.416066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.423860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.423899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.423919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.431863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.431903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.431922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.439328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.439368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.439387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.446191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.446232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.446251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.453632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.453682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.453702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.461345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.461386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.461404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.469068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.469107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.469126] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.476586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.476622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.534 [2024-07-24 19:20:54.476641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.534 [2024-07-24 19:20:54.484240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.534 [2024-07-24 19:20:54.484280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.535 [2024-07-24 19:20:54.484299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.535 [2024-07-24 19:20:54.491835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.535 [2024-07-24 19:20:54.491873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.535 [2024-07-24 19:20:54.491891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.535 [2024-07-24 19:20:54.498352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.535 [2024-07-24 19:20:54.498387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.535 [2024-07-24 19:20:54.498406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.535 [2024-07-24 19:20:54.504628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.535 [2024-07-24 19:20:54.504665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.535 [2024-07-24 19:20:54.504684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.535 [2024-07-24 19:20:54.510902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.535 [2024-07-24 19:20:54.510939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.535 [2024-07-24 19:20:54.510958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.535 [2024-07-24 19:20:54.517151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.535 [2024-07-24 19:20:54.517187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.535 [2024-07-24 19:20:54.517205] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.535 [2024-07-24 19:20:54.523416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.535 [2024-07-24 19:20:54.523451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.535 [2024-07-24 19:20:54.523468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.535 [2024-07-24 19:20:54.530875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.535 [2024-07-24 19:20:54.530916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.535 [2024-07-24 19:20:54.530935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.535 [2024-07-24 19:20:54.538621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.535 [2024-07-24 19:20:54.538662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.535 [2024-07-24 19:20:54.538682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.535 [2024-07-24 19:20:54.546289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.535 [2024-07-24 19:20:54.546334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.535 [2024-07-24 19:20:54.546356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.793 [2024-07-24 19:20:54.554069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.793 [2024-07-24 19:20:54.554113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.793 [2024-07-24 19:20:54.554132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.793 [2024-07-24 19:20:54.561883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.793 [2024-07-24 19:20:54.561924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.794 [2024-07-24 19:20:54.561944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.794 [2024-07-24 19:20:54.569274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.794 [2024-07-24 19:20:54.569314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:48.794 [2024-07-24 19:20:54.569333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.794 [2024-07-24 19:20:54.575927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.794 [2024-07-24 19:20:54.575965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.794 [2024-07-24 19:20:54.575999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.794 [2024-07-24 19:20:54.580837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.794 [2024-07-24 19:20:54.580872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.794 [2024-07-24 19:20:54.580890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.794 [2024-07-24 19:20:54.588195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.794 [2024-07-24 19:20:54.588234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.794 [2024-07-24 19:20:54.588253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.794 [2024-07-24 19:20:54.594712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.794 [2024-07-24 19:20:54.594751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.794 [2024-07-24 19:20:54.594769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.794 [2024-07-24 19:20:54.600978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.794 [2024-07-24 19:20:54.601016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.794 [2024-07-24 19:20:54.601034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.794 [2024-07-24 19:20:54.607246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.794 [2024-07-24 19:20:54.607284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.794 [2024-07-24 19:20:54.607303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.794 [2024-07-24 19:20:54.614010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:48.794 [2024-07-24 19:20:54.614049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:48.794 [2024-07-24 19:20:54.614068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:48.794 [2024-07-24 19:20:54.620466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0)
00:23:48.794 [2024-07-24 19:20:54.620514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:48.794 [2024-07-24 19:20:54.620534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error -> READ command -> COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for dozens of READs on tqpair 0x19439d0, qid:1, timestamps 19:20:54.627 through 19:20:55.039; only cid, lba, and sqhd vary ...]
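Note on the failures above: every block is the same three-step sequence. The NVMe/TCP receive path recomputes the CRC32C data digest over the received data (nvme_tcp_accel_seq_recv_compute_crc32_done), detects a mismatch, prints the affected READ, and completes it with status (00/22), i.e. status code type 0x0 (generic) / status code 0x22 (Command Transient Transport Error), with dnr:0 leaving the command retryable -- the outcome a data-digest error test expects. A minimal, self-contained sketch of the check being exercised follows; the crc32c() helper, buffer size, and corruption step are illustrative assumptions, not SPDK's actual implementation:

    /* ddgst_check.c - illustrative sketch; build: cc -o ddgst_check ddgst_check.c */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the
     * algorithm NVMe/TCP uses for its optional header and data digests. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* The log shows len:32 blocks per READ; 16 KiB assumes 512-byte blocks. */
        static uint8_t payload[32 * 512];
        memset(payload, 0xA5, sizeof(payload));

        uint32_t ddgst_sent = crc32c(payload, sizeof(payload)); /* appended by the target */
        payload[42] ^= 0x01;                                    /* corruption in flight  */
        uint32_t ddgst_recv = crc32c(payload, sizeof(payload)); /* recomputed by the host */

        if (ddgst_recv != ddgst_sent) /* -> "data digest error on tqpair=(...)" */
            printf("data digest error (expected 0x%08" PRIx32 ", got 0x%08" PRIx32 ")\n",
                   ddgst_sent, ddgst_recv);
        return 0;
    }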
[... pattern continues through 19:20:55.535, elapsed time advancing from 00:23:49.055 to 00:23:49.577 ...]
00:23:49.577 [2024-07-24 19:20:55.542371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0)
00:23:49.577 [2024-07-24 19:20:55.542405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.577 [2024-07-24 19:20:55.542423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:49.577 [2024-07-24 19:20:55.549934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0)
00:23:49.577 [2024-07-24 19:20:55.549968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.577 [2024-07-24 19:20:55.549986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.577 [2024-07-24 19:20:55.558011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.577 [2024-07-24 19:20:55.558047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.577 [2024-07-24 19:20:55.558066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.577 [2024-07-24 19:20:55.565454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.577 [2024-07-24 19:20:55.565496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.577 [2024-07-24 19:20:55.565517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.577 [2024-07-24 19:20:55.572325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.577 [2024-07-24 19:20:55.572366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.577 [2024-07-24 19:20:55.572386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.577 [2024-07-24 19:20:55.579496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.577 [2024-07-24 19:20:55.579531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.577 [2024-07-24 19:20:55.579550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.577 [2024-07-24 19:20:55.587787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.577 [2024-07-24 19:20:55.587824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.577 [2024-07-24 19:20:55.587844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.595373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.595413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.595432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.602175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.602210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.602229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.609430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.609467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.609493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.617392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.617435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.617454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.625440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.625476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.625504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.633259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.633304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.633323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.640555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.640593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.640612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.648277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.648314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.648333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.656557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.656593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.656612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.665938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.665976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.665994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.674415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.674451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.674470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.682627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.682664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.682683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.689499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.689532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.689551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.696793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.696830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.696850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.703821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.703857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.703884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.711211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.711248] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.711267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.718383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.718420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.718438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.726237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.726275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.726293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.733914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.733953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.733971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.741699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.741733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.837 [2024-07-24 19:20:55.741752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.837 [2024-07-24 19:20:55.749291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.837 [2024-07-24 19:20:55.749334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.838 [2024-07-24 19:20:55.749352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.838 [2024-07-24 19:20:55.756982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.838 [2024-07-24 19:20:55.757019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.838 [2024-07-24 19:20:55.757038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.838 [2024-07-24 19:20:55.764027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19439d0) 00:23:49.838 [2024-07-24 19:20:55.764063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.838 [2024-07-24 19:20:55.764082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.838 [2024-07-24 19:20:55.771319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.838 [2024-07-24 19:20:55.771355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.838 [2024-07-24 19:20:55.771374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.838 [2024-07-24 19:20:55.778811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.838 [2024-07-24 19:20:55.778848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.838 [2024-07-24 19:20:55.778867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.838 [2024-07-24 19:20:55.786397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.838 [2024-07-24 19:20:55.786449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.838 [2024-07-24 19:20:55.786468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.838 [2024-07-24 19:20:55.793982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.838 [2024-07-24 19:20:55.794015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.838 [2024-07-24 19:20:55.794034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.838 [2024-07-24 19:20:55.801374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.838 [2024-07-24 19:20:55.801410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.838 [2024-07-24 19:20:55.801429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.838 [2024-07-24 19:20:55.808445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.838 [2024-07-24 19:20:55.808493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.838 [2024-07-24 19:20:55.808514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.838 [2024-07-24 19:20:55.815968] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.838 [2024-07-24 19:20:55.816003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.838 [2024-07-24 19:20:55.816022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.838 [2024-07-24 19:20:55.823227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.838 [2024-07-24 19:20:55.823262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.838 [2024-07-24 19:20:55.823280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.838 [2024-07-24 19:20:55.830604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.838 [2024-07-24 19:20:55.830639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.838 [2024-07-24 19:20:55.830666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.838 [2024-07-24 19:20:55.837813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.838 [2024-07-24 19:20:55.837850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.838 [2024-07-24 19:20:55.837868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.838 [2024-07-24 19:20:55.844933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:49.838 [2024-07-24 19:20:55.844966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.838 [2024-07-24 19:20:55.844985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.096 [2024-07-24 19:20:55.852490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:50.096 [2024-07-24 19:20:55.852528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.096 [2024-07-24 19:20:55.852547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.096 [2024-07-24 19:20:55.859863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:50.096 [2024-07-24 19:20:55.859904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.096 [2024-07-24 19:20:55.859924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:50.096 [2024-07-24 19:20:55.868210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:50.096 [2024-07-24 19:20:55.868247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.096 [2024-07-24 19:20:55.868267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.096 [2024-07-24 19:20:55.876148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19439d0) 00:23:50.096 [2024-07-24 19:20:55.876185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.096 [2024-07-24 19:20:55.876205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.096 00:23:50.096 Latency(us) 00:23:50.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.096 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:50.096 nvme0n1 : 2.00 4422.53 552.82 0.00 0.00 3612.88 740.31 12815.93 00:23:50.096 =================================================================================================================== 00:23:50.097 Total : 4422.53 552.82 0.00 0.00 3612.88 740.31 12815.93 00:23:50.097 0 00:23:50.097 19:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:50.097 19:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:50.097 19:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:50.097 | .driver_specific 00:23:50.097 | .nvme_error 00:23:50.097 | .status_code 00:23:50.097 | .command_transient_transport_error' 00:23:50.097 19:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:50.355 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 285 > 0 )) 00:23:50.355 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2638531 00:23:50.355 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2638531 ']' 00:23:50.355 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2638531 00:23:50.355 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:23:50.355 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:50.355 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2638531 00:23:50.355 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:50.355 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:50.355 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2638531' 
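The assertion above, (( 285 > 0 )), is the heart of the test: it reads the bdev's NVMe error counters over the RPC socket and requires at least one transient transport error. A minimal standalone sketch of that step, using only the paths and RPCs visible in this trace (the pass/fail reporting at the end is illustrative, not digest.sh's actual code):

#!/usr/bin/env bash
# Sketch: count COMMAND TRANSIENT TRANSPORT ERROR completions recorded for
# a bdev by a running bdevperf instance, via the bdev_get_iostat RPC.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
bdev=nvme0n1

# The per-status-code counters only exist because bdev_nvme_set_options
# was called with --nvme-error-stat before the controller was attached.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

if (( errcount > 0 )); then
    echo "digest errors surfaced as $errcount transient transport errors"
else
    echo "no transient transport errors recorded" >&2
    exit 1
fi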
00:23:50.355 killing process with pid 2638531
19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2638531
Received shutdown signal, test time was about 2.000000 seconds
00:23:50.355
00:23:50.355 Latency(us)
00:23:50.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:50.355 ===================================================================================================================
00:23:50.355 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:50.355 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2638531
00:23:50.613 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:23:50.613 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:50.613 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:23:50.613 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:23:50.613 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:23:50.613 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2638845
00:23:50.613 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2638845 /var/tmp/bperf.sock
00:23:50.613 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:23:50.613 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2638845 ']'
00:23:50.613 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:50.613 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:50.613 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:50.613 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:50.613 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:50.614 [2024-07-24 19:20:56.477551] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
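run_bperf_err's launch step just traced is the standard SPDK pattern: start bdevperf with -z so it only opens the RPC socket and waits, then block until the socket is listening before configuring anything. A minimal sketch of the same pattern (the polling loop is a stand-in for autotest_common.sh's waitforlisten, not its actual code):

#!/usr/bin/env bash
# Sketch: launch bdevperf in wait-for-RPC mode and block until its UNIX
# domain RPC socket appears.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bperf.sock

# -m 2: core mask 0x2 (core 1 only); -w/-o/-q/-t: randwrite, 4096-byte
# I/O, queue depth 128, 2-second run; -z: idle until driven via RPC.
"$spdk/build/examples/bdevperf" -m 2 -r "$sock" \
    -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

# Stand-in for waitforlisten: poll up to ~10s for the socket.
for _ in $(seq 1 100); do
    [[ -S "$sock" ]] && break
    sleep 0.1
done
[[ -S "$sock" ]] || { echo "bdevperf (pid $bperfpid) never listened" >&2; exit 1; }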
00:23:50.614 [2024-07-24 19:20:56.477654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2638845 ]
00:23:50.614 EAL: No free 2048 kB hugepages reported on node 1
00:23:50.614 [2024-07-24 19:20:56.537610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:50.872 [2024-07-24 19:20:56.654759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:50.872 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:50.872 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:23:50.872 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:50.872 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:51.130 19:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:51.130 19:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:51.130 19:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:51.130 19:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:51.130 19:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:51.130 19:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:51.696 nvme0n1
00:23:51.696 19:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:23:51.696 19:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:51.696 19:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:51.696 19:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:51.696 19:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:51.696 19:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:51.696 Running I/O for 2 seconds...
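Restated as a plain script, the configuration sequence just traced is four RPCs plus the perform_tests trigger. A sketch under the same paths as above (the RPC commands are verbatim from the trace; the comments are interpretation):

#!/usr/bin/env bash
# Sketch: configure the freshly started bdevperf for the data-digest
# error-injection test, then run the timed workload.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

# Keep per-status-code NVMe error counters; a retry count of -1 means
# digest failures are retried indefinitely rather than failing the I/O.
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Make sure crc32c error injection is off while the controller attaches.
rpc accel_error_inject_error -o crc32c -t disable

# Attach the TCP target with the data digest (--ddgst) enabled; this
# creates the nvme0n1 bdev seen in the trace.
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 256 crc32c results so the data digest checks fail.
rpc accel_error_inject_error -o crc32c -t corrupt -i 256

# Kick off the 2-second randwrite run configured at launch time.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests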
00:23:51.696 [2024-07-24 19:20:57.624767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190e38d0
00:23:51.696 [2024-07-24 19:20:57.626450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:51.696 [2024-07-24 19:20:57.626500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0
[many further repeated records elided: tcp.c:2113:data_crc32_calc_done reports a Data digest error on tqpair=(0x12ef4d0) for successive pdu values, and each WRITE on qid:1 completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22)]
lba:169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.474 [2024-07-24 19:20:58.423842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.474 [2024-07-24 19:20:58.438249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.474 [2024-07-24 19:20:58.438473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.474 [2024-07-24 19:20:58.438510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.474 [2024-07-24 19:20:58.452934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.474 [2024-07-24 19:20:58.453156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.474 [2024-07-24 19:20:58.453186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.474 [2024-07-24 19:20:58.467586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.474 [2024-07-24 19:20:58.467817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.474 [2024-07-24 19:20:58.467846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.474 [2024-07-24 19:20:58.482200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.474 [2024-07-24 19:20:58.482430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.474 [2024-07-24 19:20:58.482460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.497378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.497613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.497646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.512034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.512258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.512289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.526709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.526932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:107 nsid:1 lba:17463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.526962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.541383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.541629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.541659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.556124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.556348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.556386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.570761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.570987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.571017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.585358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.585590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.585620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.600007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.600231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.600259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.614728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.614953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.614982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.629354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.629590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.629618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.644192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.644421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.644450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.658868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.659092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.659121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.673492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.673717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.673745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.688125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.688349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.688379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.702809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.703032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.703062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.717488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 19:20:58.717714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.717743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.733 [2024-07-24 19:20:58.732161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.733 [2024-07-24 
19:20:58.732383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.733 [2024-07-24 19:20:58.732412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.747257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.747501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.747541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.762108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.762335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.762367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.776792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.777018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.777056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.791486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.791713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.791748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.806197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.806422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.806452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.820950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.821175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.821205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.835615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 
00:23:52.992 [2024-07-24 19:20:58.835846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.835876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.850305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.850539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.850569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.865052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.865277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.865306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.879732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.879958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.879987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.894565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.894798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.894827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.909245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.909467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.909504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.923930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.924168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.924197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.938616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) 
with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.938844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.938873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.953259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.953488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.953526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.967899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.968121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.968150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.982594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.982818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.982847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:52.992 [2024-07-24 19:20:58.997237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:52.992 [2024-07-24 19:20:58.997459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.992 [2024-07-24 19:20:58.997495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.012371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.012621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.012653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.027071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.027302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.027332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.041688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.041914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.041945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.056344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.056577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.056606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.070960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.071183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.071212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.085617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.085840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.085870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.100273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.100506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.100535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.114988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.115211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.115240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.129667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.129891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.129920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.144384] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.144627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.144657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.159165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.159389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.159418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.173869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.174095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.174137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.188554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.188778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.188807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.203170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.203396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.203425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.217848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.218079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.218107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.232444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.232684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.232714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 
[2024-07-24 19:20:59.247096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.247318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.247346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.251 [2024-07-24 19:20:59.261845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.251 [2024-07-24 19:20:59.262074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.251 [2024-07-24 19:20:59.262105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.510 [2024-07-24 19:20:59.276826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.510 [2024-07-24 19:20:59.277049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.510 [2024-07-24 19:20:59.277082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.510 [2024-07-24 19:20:59.291502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.510 [2024-07-24 19:20:59.291731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.510 [2024-07-24 19:20:59.291761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.510 [2024-07-24 19:20:59.306080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.510 [2024-07-24 19:20:59.306320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.510 [2024-07-24 19:20:59.306349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.510 [2024-07-24 19:20:59.320808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.510 [2024-07-24 19:20:59.321032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.510 [2024-07-24 19:20:59.321061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.510 [2024-07-24 19:20:59.335468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.510 [2024-07-24 19:20:59.335703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.510 [2024-07-24 19:20:59.335734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 
m:0 dnr:0 00:23:53.510 [2024-07-24 19:20:59.350158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.510 [2024-07-24 19:20:59.350381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.510 [2024-07-24 19:20:59.350410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.510 [2024-07-24 19:20:59.364838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.510 [2024-07-24 19:20:59.365060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.510 [2024-07-24 19:20:59.365089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.510 [2024-07-24 19:20:59.379533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.510 [2024-07-24 19:20:59.379775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.510 [2024-07-24 19:20:59.379806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.510 [2024-07-24 19:20:59.394305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.510 [2024-07-24 19:20:59.394545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.510 [2024-07-24 19:20:59.394578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.510 [2024-07-24 19:20:59.409166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.510 [2024-07-24 19:20:59.409393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.510 [2024-07-24 19:20:59.409424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.510 [2024-07-24 19:20:59.424002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.510 [2024-07-24 19:20:59.424243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.511 [2024-07-24 19:20:59.424273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.511 [2024-07-24 19:20:59.438731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.511 [2024-07-24 19:20:59.438957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.511 [2024-07-24 19:20:59.438986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.511 [2024-07-24 19:20:59.453529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.511 [2024-07-24 19:20:59.453758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.511 [2024-07-24 19:20:59.453788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.511 [2024-07-24 19:20:59.468235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.511 [2024-07-24 19:20:59.468469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.511 [2024-07-24 19:20:59.468506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.511 [2024-07-24 19:20:59.483005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.511 [2024-07-24 19:20:59.483231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.511 [2024-07-24 19:20:59.483260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.511 [2024-07-24 19:20:59.497725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.511 [2024-07-24 19:20:59.497947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.511 [2024-07-24 19:20:59.497977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.511 [2024-07-24 19:20:59.512454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.511 [2024-07-24 19:20:59.512693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.511 [2024-07-24 19:20:59.512723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.769 [2024-07-24 19:20:59.527704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.769 [2024-07-24 19:20:59.527934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.769 [2024-07-24 19:20:59.527967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.769 [2024-07-24 19:20:59.542451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.769 [2024-07-24 19:20:59.542685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.769 [2024-07-24 19:20:59.542716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.769 [2024-07-24 19:20:59.557234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.769 [2024-07-24 19:20:59.557466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.769 [2024-07-24 19:20:59.557513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.769 [2024-07-24 19:20:59.571946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.769 [2024-07-24 19:20:59.572174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.769 [2024-07-24 19:20:59.572204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.769 [2024-07-24 19:20:59.586682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.769 [2024-07-24 19:20:59.586908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.769 [2024-07-24 19:20:59.586937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.769 [2024-07-24 19:20:59.601384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef4d0) with pdu=0x2000190f0788 00:23:53.769 [2024-07-24 19:20:59.601618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:53.769 [2024-07-24 19:20:59.601648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:53.769 00:23:53.769 Latency(us) 00:23:53.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.769 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:53.769 nvme0n1 : 2.01 17391.18 67.93 0.00 0.00 7341.43 3543.80 16408.27 00:23:53.769 =================================================================================================================== 00:23:53.769 Total : 17391.18 67.93 0.00 0.00 7341.43 3543.80 16408.27 00:23:53.769 0 00:23:53.769 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:53.769 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:53.769 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:53.769 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:53.769 | .driver_specific 00:23:53.769 | .nvme_error 00:23:53.769 | .status_code 00:23:53.769 | .command_transient_transport_error' 00:23:54.029 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 )) 00:23:54.029 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # 
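For reference, the count checked above comes out of the bdev layer's per-status-code NVMe error statistics, queried over bdevperf's private RPC socket. A minimal standalone sketch of the same query, assuming bdevperf is still listening on /var/tmp/bperf.sock and the bdev is named nvme0n1 as in this run (the counter only exists because the test enables bdev_nvme_set_options --nvme-error-stat):

    # Read per-bdev I/O statistics over the bdevperf JSON-RPC socket, then
    # extract the counter that the injected digest errors increment.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

With the 136 transient transport errors recorded above, this prints 136, and the test's (( count > 0 )) assertion passes.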
00:23:54.029 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2638845
00:23:54.029 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2638845 ']'
00:23:54.029 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2638845
00:23:54.029 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:23:54.029 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:54.029 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2638845
00:23:54.029 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:54.029 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:54.029 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2638845'
killing process with pid 2638845
00:23:54.029 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2638845
Received shutdown signal, test time was about 2.000000 seconds
00:23:54.029
00:23:54.029 Latency(us)
00:23:54.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:54.029 ===================================================================================================================
00:23:54.029 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:54.029 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2638845
00:23:54.288 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:23:54.288 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:54.288 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:23:54.288 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:23:54.288 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:23:54.288 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2639263
00:23:54.288 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:23:54.288 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2639263 /var/tmp/bperf.sock
00:23:54.288 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2639263 ']'
00:23:54.288 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:54.288 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:54.288 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
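Condensed, the relaunch traced above boils down to starting bdevperf against a private RPC socket and blocking until that socket exists. A rough sketch under the same paths and arguments as this run; the polling loop is only an illustrative stand-in for the autotest waitforlisten helper, not its actual implementation:

    # Start bdevperf on core mask 0x2 with its JSON-RPC server on a private
    # socket; -z makes it idle until an explicit perform_tests RPC arrives.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Stand-in for waitforlisten: poll (up to ~10 s) for the RPC socket.
    for _ in $(seq 1 100); do
        [ -S /var/tmp/bperf.sock ] && break
        sleep 0.1
    done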
00:23:54.288 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:54.288 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:54.288 [2024-07-24 19:21:00.222082] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:23:54.288 [2024-07-24 19:21:00.222175] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2639263 ]
00:23:54.288 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:54.288 Zero copy mechanism will not be used.
00:23:54.288 EAL: No free 2048 kB hugepages reported on node 1
00:23:54.288 [2024-07-24 19:21:00.285967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:54.288 [2024-07-24 19:21:00.406088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:54.546 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:54.546 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:23:54.546 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:54.546 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:54.804 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:54.804 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:54.804 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:54.804 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:54.804 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:54.804 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:55.433 nvme0n1
00:23:55.433 19:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:55.433 19:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:55.433 19:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:55.433 19:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
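The RPC sequence traced above is what arms the digest-error scenario: per-status-code error statistics with unlimited retries, a TCP controller attached with the data digest enabled, and the accel-layer crc32c error injector switched from disable to corrupt. A hand-issued sketch of the same sequence follows; every command and argument is taken from the trace, but the socket used for the two accel_error_inject_error calls is an assumption, since the rpc_cmd wrapper above does not show which socket it expands to:

    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'

    # Count NVMe completions per status code and retry failed I/O forever,
    # so injected digest errors surface as counted transient errors rather
    # than failed bdevperf jobs.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any stale injection, then attach the target with the TCP data
    # digest (DDGST) enabled on the connection.
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm the injector with the same arguments as the trace (-t corrupt -i 32),
    # so crc32c results are periodically corrupted and digests mismatch.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32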
00:23:55.433 19:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:55.433 19:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:55.693 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:55.693 Zero copy mechanism will not be used.
00:23:55.693 Running I/O for 2 seconds...
00:23:55.693 [2024-07-24 19:21:01.479785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90
00:23:55.693 [2024-07-24 19:21:01.480174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.693 [2024-07-24 19:21:01.480216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:55.693 [... the same data-digest-error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats every 6-7 ms on tqpair=(0x12ef810) with pdu=0x2000190fef90, always qid:1 cid:15 len:32 with varying LBAs, sqhd cycling 0001/0021/0041/0061, from 19:21:01.487045 through 19:21:01.648912 ...]
00:23:55.694 [2024-07-24 19:21:01.655039]
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.694 [2024-07-24 19:21:01.655398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.694 [2024-07-24 19:21:01.655432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.694 [2024-07-24 19:21:01.660932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.694 [2024-07-24 19:21:01.661288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.694 [2024-07-24 19:21:01.661320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.694 [2024-07-24 19:21:01.666673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.694 [2024-07-24 19:21:01.667029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.694 [2024-07-24 19:21:01.667061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.694 [2024-07-24 19:21:01.672454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.694 [2024-07-24 19:21:01.672819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.694 [2024-07-24 19:21:01.672850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.694 [2024-07-24 19:21:01.678925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.694 [2024-07-24 19:21:01.679269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.694 [2024-07-24 19:21:01.679300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.694 [2024-07-24 19:21:01.684699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.694 [2024-07-24 19:21:01.685057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.694 [2024-07-24 19:21:01.685087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.694 [2024-07-24 19:21:01.690547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.694 [2024-07-24 19:21:01.690904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.694 [2024-07-24 19:21:01.690935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
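The whole run above is driven over bdevperf's JSON-RPC socket: the `bdevperf.py -s /var/tmp/bperf.sock perform_tests` invocation at the top of this excerpt sends a single `perform_tests` method call and blocks until the workload finishes. A minimal sketch of that call is below; the naive response framing (read until a complete JSON object parses) is an assumption of this sketch, not the real client's logic.

```python
#!/usr/bin/env python3
"""Minimal sketch: issue bdevperf's perform_tests RPC over its Unix socket."""
import json
import socket


def perform_tests(sock_path="/var/tmp/bperf.sock"):
    request = {"jsonrpc": "2.0", "method": "perform_tests", "id": 1}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise RuntimeError("socket closed before a full response")
            buf += chunk
            try:
                return json.loads(buf)  # complete JSON object received
            except json.JSONDecodeError:
                continue  # partial response, keep reading


if __name__ == "__main__":
    print(perform_tests())
```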
00:23:55.694 [2024-07-24 19:21:01.696135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.694 [2024-07-24 19:21:01.696492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.694 [2024-07-24 19:21:01.696530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.694 [2024-07-24 19:21:01.702188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.694 [2024-07-24 19:21:01.702562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.694 [2024-07-24 19:21:01.702604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.708049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.708410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.708445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.714534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.714902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.714936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.721200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.721575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.721607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.727203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.727566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.727609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.734151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.734386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.734418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.742104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.742478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.742516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.749714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.750072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.750105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.757197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.757587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.757619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.764952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.765308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.765339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.772619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.772986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.773018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.780285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.780662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.780694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.787911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.788264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.788295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.795454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.795859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.795889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.802988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.803351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.803383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.810496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.810863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.810894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.818010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.818366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.818396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.825332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.825695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.825727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.833037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.833381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.833412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.840664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.841038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.841070] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.847787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.848158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.848189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.854749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.855108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.855139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.862269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.862649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.862681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.869787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.870142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.870173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.954 [2024-07-24 19:21:01.877470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.954 [2024-07-24 19:21:01.877842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.954 [2024-07-24 19:21:01.877874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.955 [2024-07-24 19:21:01.884124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.955 [2024-07-24 19:21:01.884478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.955 [2024-07-24 19:21:01.884515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.955 [2024-07-24 19:21:01.890651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.955 [2024-07-24 19:21:01.891004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
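Every `data_crc32_calc_done: *ERROR*: Data digest error` record above is the target recomputing the NVMe/TCP data digest, a CRC32C (Castagnoli) over the PDU's DATA field, and finding a mismatch with the digest carried in the corrupted WRITE. The pure-Python CRC32C below is for illustration only; SPDK itself uses optimized or offloaded implementations, and `data_digest_ok` is a hypothetical helper, not an SPDK API.

```python
def _make_table():
    poly = 0x82F63B78  # reflected CRC-32C polynomial
    table = []
    for n in range(256):
        c = n
        for _ in range(8):
            c = (c >> 1) ^ poly if c & 1 else c >> 1
        table.append(c)
    return table


_TABLE = _make_table()


def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc = (crc >> 8) ^ _TABLE[(crc ^ byte) & 0xFF]
    return crc ^ 0xFFFFFFFF


def data_digest_ok(payload: bytes, received_digest: int) -> bool:
    # A mismatch here is exactly what tcp.c reports as a "Data digest error".
    return crc32c(payload) == received_digest


assert crc32c(b"123456789") == 0xE3069283  # standard CRC-32C check value
```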
00:23:55.955 [2024-07-24 19:21:01.891038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.955 [2024-07-24 19:21:01.896502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.955 [2024-07-24 19:21:01.896854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.955 [2024-07-24 19:21:01.896885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.955 [2024-07-24 19:21:01.902469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.955 [2024-07-24 19:21:01.902831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.955 [2024-07-24 19:21:01.902861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.955 [2024-07-24 19:21:01.908545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.955 [2024-07-24 19:21:01.908915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.955 [2024-07-24 19:21:01.908945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.955 [2024-07-24 19:21:01.915065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.955 [2024-07-24 19:21:01.915433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.955 [2024-07-24 19:21:01.915473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.955 [2024-07-24 19:21:01.921265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.955 [2024-07-24 19:21:01.921624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.955 [2024-07-24 19:21:01.921656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.955 [2024-07-24 19:21:01.927705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.955 [2024-07-24 19:21:01.928058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.955 [2024-07-24 19:21:01.928088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.955 [2024-07-24 19:21:01.933415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.955 [2024-07-24 19:21:01.933773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.955 [2024-07-24 19:21:01.933804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.955 [2024-07-24 19:21:01.939762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.955 [2024-07-24 19:21:01.940115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.955 [2024-07-24 19:21:01.940148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.955 [2024-07-24 19:21:01.945794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.955 [2024-07-24 19:21:01.946172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.955 [2024-07-24 19:21:01.946202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.955 [2024-07-24 19:21:01.951770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.955 [2024-07-24 19:21:01.952123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.955 [2024-07-24 19:21:01.952154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.955 [2024-07-24 19:21:01.958022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.955 [2024-07-24 19:21:01.958376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.955 [2024-07-24 19:21:01.958407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.955 [2024-07-24 19:21:01.964251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:55.955 [2024-07-24 19:21:01.964621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.955 [2024-07-24 19:21:01.964654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.214 [2024-07-24 19:21:01.970269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.214 [2024-07-24 19:21:01.970677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.214 [2024-07-24 19:21:01.970712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.214 [2024-07-24 19:21:01.976701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:01.977054] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:01.977087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:01.983014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:01.983365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:01.983396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:01.989584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:01.989933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:01.989964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:01.996040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:01.996393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:01.996425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.002282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.002640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.002671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.008367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.008726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.008757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.014092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.014445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.014476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.019766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.020125] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.020156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.025734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.026104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.026134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.032032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.032386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.032417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.037920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.038276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.038307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.043625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.043960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.043989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.049965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.050316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.050346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.055960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.056311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.056342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.061954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 
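On the host side, each digest failure surfaces as the completion `spdk_nvme_print_completion` prints above, with the status rendered as `(SCT/SC)` in hex: `(00/22)` is status code type 0x0 (generic) and status code 0x22, which the log itself names COMMAND TRANSIENT TRANSPORT ERROR, a retryable transport-level failure. A tiny decoder for just the values appearing in this log, as a sketch rather than a full NVMe status table:

```python
# Decode the "(SCT/SC)" status field printed in the completions above.
# Only the codes seen in this log are mapped; a real decoder would cover
# the full NVMe status tables.
SCT_NAMES = {0x0: "GENERIC"}
SC_GENERIC = {0x00: "SUCCESS", 0x22: "COMMAND TRANSIENT TRANSPORT ERROR"}


def decode_status(sct: int, sc: int) -> str:
    sct_name = SCT_NAMES.get(sct, f"SCT {sct:#x}")
    if sct == 0x0:
        sc_name = SC_GENERIC.get(sc, f"SC {sc:#x}")
    else:
        sc_name = f"SC {sc:#x}"
    return f"{sct_name}/{sc_name}"


print(decode_status(0x00, 0x22))  # -> GENERIC/COMMAND TRANSIENT TRANSPORT ERROR
```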
00:23:56.215 [2024-07-24 19:21:02.062312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.062343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.068470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.068832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.068863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.075846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.076199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.076241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.083497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.083851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.083882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.090649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.091004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.091035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.096851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.097201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.097232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.103285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.103649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.103680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.109541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.109905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.109935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.115854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.116204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.116234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.122110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.122473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.122513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.128755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.129109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.129143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.135001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.135365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.135398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.140936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.141293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.141324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.146745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.147098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.147129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.153049] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.153422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.153453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.159610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.159962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.159993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.165552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.165902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.165932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.171230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.171591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.171622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.177349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.177711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.177742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.183724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.184059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.184095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.215 [2024-07-24 19:21:02.190651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.215 [2024-07-24 19:21:02.191003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.215 [2024-07-24 19:21:02.191033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
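For triaging a capture like this one, a small parser can tally the digest errors per qpair and collect the affected LBAs from the WRITE records. The helper below is hypothetical (not part of SPDK) and assumes the input has one log record per line, as the console originally emitted them.

```python
import re
import sys
from collections import Counter

# Patterns match the tcp.c error records and nvme_qpair.c WRITE records above.
ERR_RE = re.compile(
    r"data_crc32_calc_done: \*ERROR\*: Data digest error on tqpair=\((0x[0-9a-fA-F]+)\)"
)
CMD_RE = re.compile(r"WRITE sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")


def summarize(log_text: str):
    per_qpair = Counter(m.group(1) for m in ERR_RE.finditer(log_text))
    lbas = [int(m.group(4)) for m in CMD_RE.finditer(log_text)]
    return per_qpair, lbas


if __name__ == "__main__":
    per_qpair, lbas = summarize(sys.stdin.read())
    for qpair, count in per_qpair.items():
        print(f"tqpair {qpair}: {count} data digest errors")
    if lbas:
        print(f"{len(lbas)} WRITEs affected, lba range {min(lbas)}..{max(lbas)}")
```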
00:23:56.215 [2024-07-24 19:21:02.196787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.216 [2024-07-24 19:21:02.197140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.216 [2024-07-24 19:21:02.197170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.216 [2024-07-24 19:21:02.203668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.216 [2024-07-24 19:21:02.204019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.216 [2024-07-24 19:21:02.204049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.216 [2024-07-24 19:21:02.210231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.216 [2024-07-24 19:21:02.210587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.216 [2024-07-24 19:21:02.210619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.216 [2024-07-24 19:21:02.216073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.216 [2024-07-24 19:21:02.216421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.216 [2024-07-24 19:21:02.216451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.216 [2024-07-24 19:21:02.222227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.216 [2024-07-24 19:21:02.222571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.216 [2024-07-24 19:21:02.222604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.228242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.228605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.228639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.233950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.234303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.234337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.239711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.240070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.240101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.246766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.247117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.247147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.252709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.253059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.253090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.258541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.258899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.258929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.264336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.264695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.264726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.270251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.270616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.270647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.276292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.276671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.276702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.282705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.283063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.283093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.288656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.289012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.289042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.294669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.295021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.295051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.300989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.301323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.301354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.307003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.307354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.307384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.313377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.313734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.313765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.320117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.320471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.320510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.326034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.326389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.326419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.332312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.332679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.332711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.338058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.338415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.338446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.343641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.343991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.344028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.349246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.349602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.349633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.355891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.355996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.356025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.362249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.362609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 
[2024-07-24 19:21:02.362639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.369604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.369963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.369993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.474 [2024-07-24 19:21:02.377432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.474 [2024-07-24 19:21:02.377792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.474 [2024-07-24 19:21:02.377822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.475 [2024-07-24 19:21:02.385526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.475 [2024-07-24 19:21:02.385887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.475 [2024-07-24 19:21:02.385917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.475 [2024-07-24 19:21:02.393581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.475 [2024-07-24 19:21:02.393942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.475 [2024-07-24 19:21:02.393973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.475 [2024-07-24 19:21:02.401135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.475 [2024-07-24 19:21:02.401315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.475 [2024-07-24 19:21:02.401345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.475 [2024-07-24 19:21:02.409259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.475 [2024-07-24 19:21:02.409630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.475 [2024-07-24 19:21:02.409661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.475 [2024-07-24 19:21:02.416919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.475 [2024-07-24 19:21:02.417254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.475 [2024-07-24 19:21:02.417284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.475 [2024-07-24 19:21:02.424656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.475 [2024-07-24 19:21:02.424855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.475 [2024-07-24 19:21:02.424885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.475 [2024-07-24 19:21:02.432549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.475 [2024-07-24 19:21:02.432907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.475 [2024-07-24 19:21:02.432938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.475 [2024-07-24 19:21:02.440108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.475 [2024-07-24 19:21:02.440460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.475 [2024-07-24 19:21:02.440498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.475 [2024-07-24 19:21:02.447457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.475 [2024-07-24 19:21:02.447826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.475 [2024-07-24 19:21:02.447856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.475 [2024-07-24 19:21:02.454874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.475 [2024-07-24 19:21:02.455229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.475 [2024-07-24 19:21:02.455258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.475 [2024-07-24 19:21:02.462300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.475 [2024-07-24 19:21:02.462508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.475 [2024-07-24 19:21:02.462539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.475 [2024-07-24 19:21:02.470066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.475 [2024-07-24 19:21:02.470435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.475 [2024-07-24 19:21:02.470466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.475 [2024-07-24 19:21:02.477612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.475 [2024-07-24 19:21:02.477985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.475 [2024-07-24 19:21:02.478016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.475 [2024-07-24 19:21:02.484145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.475 [2024-07-24 19:21:02.484500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.475 [2024-07-24 19:21:02.484533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.491054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.491421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.491454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.497208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.497570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.497603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.503440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.503803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.503835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.509839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.510189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.510220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.516162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.516520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.516550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.522289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.522653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.522684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.528311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.528674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.528713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.534324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.534688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.534718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.540167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.540526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.540556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.546291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.546668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.546699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.552141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.552503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.552533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.558360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 
[2024-07-24 19:21:02.558716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.558747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.564423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.564783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.564813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.570875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.571228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.571258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.576999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.577351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.577382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.583176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.583540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.583570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.589463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.735 [2024-07-24 19:21:02.589825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.735 [2024-07-24 19:21:02.589855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.735 [2024-07-24 19:21:02.595667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.596019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.596050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.601934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.602288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.602318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.608210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.608569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.608600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.614568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.614920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.614951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.620995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.621363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.621392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.627008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.627359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.627390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.632946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.633298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.633329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.638502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.638856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.638886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.644074] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.644425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.644455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.649828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.650179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.650209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.656668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.657005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.657035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.662531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.662885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.662914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.668384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.668743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.668773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.674355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.674718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.674748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.680141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.680374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.680404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:56.736 [2024-07-24 19:21:02.685507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.685857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.685894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.692053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.692430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.692461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.698589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.698941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.698971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.704884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.705242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.705272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.711115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.711470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.711510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.717899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.718253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.718283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.724068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.724428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.724459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.730597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.730952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.730983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.736952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.737301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.737330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.736 [2024-07-24 19:21:02.743319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.736 [2024-07-24 19:21:02.743675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.736 [2024-07-24 19:21:02.743706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.994 [2024-07-24 19:21:02.749421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.749783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.749817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.755993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.756350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.756382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.762733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.763088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.763120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.769137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.769494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.769525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.775621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.775975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.776006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.781455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.781814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.781845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.787836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.788192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.788222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.794297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.794657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.794696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.800310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.800670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.800700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.806525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.806882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.806912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.812674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.813028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.813058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.818956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.819310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.819339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.824638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.824990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.825020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.830452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.830818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.830848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.836470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.836827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.836858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.842262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.842618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.842648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.848660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.849022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.849051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.855305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.855669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 
[2024-07-24 19:21:02.855700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.861615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.861971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.862000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.867532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.867882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.867912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.873662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.874014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.874045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.880100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.880451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.880489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.886685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.887036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.887066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.893780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.894131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.894162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.901088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.995 [2024-07-24 19:21:02.901433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.995 [2024-07-24 19:21:02.901465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.995 [2024-07-24 19:21:02.907380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.996 [2024-07-24 19:21:02.907744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.996 [2024-07-24 19:21:02.907775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.996 [2024-07-24 19:21:02.914238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.996 [2024-07-24 19:21:02.914619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.996 [2024-07-24 19:21:02.914650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.996 [2024-07-24 19:21:02.920863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.996 [2024-07-24 19:21:02.921211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.996 [2024-07-24 19:21:02.921242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.996 [2024-07-24 19:21:02.926867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.996 [2024-07-24 19:21:02.927211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.996 [2024-07-24 19:21:02.927241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.996 [2024-07-24 19:21:02.933384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.996 [2024-07-24 19:21:02.933751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.996 [2024-07-24 19:21:02.933782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.996 [2024-07-24 19:21:02.941495] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.996 [2024-07-24 19:21:02.941852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.996 [2024-07-24 19:21:02.941883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.996 [2024-07-24 19:21:02.949263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.996 [2024-07-24 19:21:02.949624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.996 [2024-07-24 19:21:02.949655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.996 [2024-07-24 19:21:02.957578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.996 [2024-07-24 19:21:02.957941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.996 [2024-07-24 19:21:02.957971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.996 [2024-07-24 19:21:02.965910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.996 [2024-07-24 19:21:02.966273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.996 [2024-07-24 19:21:02.966316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.996 [2024-07-24 19:21:02.974274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.996 [2024-07-24 19:21:02.974632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.996 [2024-07-24 19:21:02.974663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.996 [2024-07-24 19:21:02.982712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.996 [2024-07-24 19:21:02.983066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.996 [2024-07-24 19:21:02.983098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.996 [2024-07-24 19:21:02.990725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.996 [2024-07-24 19:21:02.991073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.996 [2024-07-24 19:21:02.991104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.996 [2024-07-24 19:21:02.997179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.996 [2024-07-24 19:21:02.997553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.996 [2024-07-24 19:21:02.997583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.996 [2024-07-24 19:21:03.003583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90 00:23:56.996 [2024-07-24 19:21:03.003936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.996 [2024-07-24 19:21:03.003967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:57.255 [2024-07-24 19:21:03.010089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ef810) with pdu=0x2000190fef90
00:23:57.255 [2024-07-24 19:21:03.010439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.255 [2024-07-24 19:21:03.010475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / WRITE command dump / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triple repeats for dozens more LBAs on the same tqpair between 19:21:03.016 and 19:21:03.469; the near-identical entries are omitted here ...]
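Each injected digest failure in the run above surfaces as exactly one such triple, so the total error count can be cross-checked straight from a saved console log. A minimal sketch, assuming this output was captured to a hypothetical build.log:

# Tally the transient transport error completions recorded in the log;
# build.log is a placeholder for wherever this console output was saved.
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log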
00:23:57.515 Latency(us)
00:23:57.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:57.515 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:23:57.515 nvme0n1 : 2.00 4746.21 593.28 0.00 0.00 3363.01 2500.08 11068.30
00:23:57.515 ===================================================================================================================
00:23:57.515 Total : 4746.21 593.28 0.00 0.00 3363.01 2500.08 11068.30
00:23:57.515 0
00:23:57.515 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:57.515 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:57.515 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:57.515 | .driver_specific
00:23:57.515 | .nvme_error
00:23:57.515 | .status_code
00:23:57.515 | .command_transient_transport_error'
00:23:57.515 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:58.080 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 306 > 0 ))
00:23:58.080 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2639263
00:23:58.080 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2639263 ']'
00:23:58.080 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2639263
00:23:58.080 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:23:58.080 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:58.080 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2639263
00:23:58.080 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
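For reference, the get_transient_errcount helper traced above amounts to one JSON-RPC round trip plus a jq filter. A standalone sketch, assuming the same bperf RPC socket and bdev name used in this run:

#!/usr/bin/env bash
# Read the driver-specific NVMe error counters for one bdev from a running
# SPDK application over its JSON-RPC socket, and check the transient
# transport error count that the digest test asserts on.
SOCK=/var/tmp/bperf.sock   # RPC socket of the bdevperf instance in this run
BDEV=nvme0n1               # bdev attached by the test
count=$(scripts/rpc.py -s "$SOCK" bdev_get_iostat -b "$BDEV" \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
echo "transient transport errors: $count"
(( count > 0 ))   # the test passes when at least one injected error surfaced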
19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:58.080 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2639263' 00:23:58.080 killing process with pid 2639263 00:23:58.080 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2639263 00:23:58.080 Received shutdown signal, test time was about 2.000000 seconds 00:23:58.080 00:23:58.080 Latency(us) 00:23:58.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.080 =================================================================================================================== 00:23:58.080 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:58.080 19:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2639263 00:23:58.080 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2638105 00:23:58.080 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2638105 ']' 00:23:58.080 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2638105 00:23:58.080 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:23:58.080 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:58.080 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2638105 00:23:58.080 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:58.080 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:58.080 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2638105' 00:23:58.080 killing process with pid 2638105 00:23:58.080 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2638105 00:23:58.080 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2638105 00:23:58.339 00:23:58.339 real 0m15.642s 00:23:58.339 user 0m31.817s 00:23:58.339 sys 0m3.942s 00:23:58.339 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:58.339 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:58.339 ************************************ 00:23:58.339 END TEST nvmf_digest_error 00:23:58.339 ************************************ 00:23:58.339 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:23:58.339 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:23:58.339 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:58.339 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:23:58.339 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:58.339 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:23:58.340 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:58.340 
19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:58.340 rmmod nvme_tcp 00:23:58.340 rmmod nvme_fabrics 00:23:58.340 rmmod nvme_keyring 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2638105 ']' 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2638105 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2638105 ']' 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2638105 00:23:58.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2638105) - No such process 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2638105 is not found' 00:23:58.599 Process with pid 2638105 is not found 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.599 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.501 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:00.501 00:24:00.501 real 0m35.348s 00:24:00.501 user 1m4.466s 00:24:00.501 sys 0m9.221s 00:24:00.501 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:00.501 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:00.501 ************************************ 00:24:00.501 END TEST nvmf_digest 00:24:00.501 ************************************ 00:24:00.501 19:21:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:24:00.501 19:21:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:24:00.501 19:21:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:24:00.501 19:21:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:00.501 19:21:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:00.501 19:21:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:00.501 19:21:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.501 ************************************ 00:24:00.501 START TEST nvmf_bdevperf 00:24:00.501 ************************************ 00:24:00.501 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:00.759 * Looking for test storage... 00:24:00.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.759 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.760 19:21:06 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:00.760 19:21:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.137 19:21:08 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:02.137 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:02.137 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.137 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:02.138 
19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:02.138 Found net devices under 0000:08:00.0: cvl_0_0 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:02.138 Found net devices under 0000:08:00.1: cvl_0_1 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.138 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.396 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:24:02.396 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.396 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:02.396 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.396 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.396 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:02.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:24:02.397 00:24:02.397 --- 10.0.0.2 ping statistics --- 00:24:02.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.397 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:02.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:24:02.397 00:24:02.397 --- 10.0.0.1 ping statistics --- 00:24:02.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.397 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2641300 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2641300 00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2641300 ']' 
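The interface plumbing traced above (nvmf_tcp_init in nvmf/common.sh) pins the target-side port in a network namespace and leaves the initiator port in the default one. Collected into a single sketch, using the cvl_0_0/cvl_0_1 device names and the addresses from this machine:

#!/usr/bin/env bash
# Build the two-endpoint NVMe/TCP test topology and verify reachability.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                             # reach the target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # reach the initiator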
00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:02.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:02.397 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.397 [2024-07-24 19:21:08.312674] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:24:02.397 [2024-07-24 19:21:08.312773] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:02.397 EAL: No free 2048 kB hugepages reported on node 1
00:24:02.397 [2024-07-24 19:21:08.378202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:02.656 [2024-07-24 19:21:08.495556] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:02.656 [2024-07-24 19:21:08.495618] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:02.656 [2024-07-24 19:21:08.495634] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:02.656 [2024-07-24 19:21:08.495654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:02.656 [2024-07-24 19:21:08.495666] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
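The app_setup_trace notices above confirm that the target was launched with tracing live (the -e 0xFFFF tracepoint group mask requested on the nvmf_tgt command line). While nvmf_tgt instance 0 is running, a snapshot can be pulled exactly as the notice suggests, or the shared-memory ring copied for offline decoding (destination path below is illustrative):

# capture a snapshot of the nvmf tracepoints for app instance 0 (command quoted from the notice)
spdk_trace -s nvmf -i 0
# or preserve the raw trace ring for later analysis
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0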
00:24:02.656 [2024-07-24 19:21:08.495746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:02.656 [2024-07-24 19:21:08.499057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:02.656 [2024-07-24 19:21:08.499090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.656 [2024-07-24 19:21:08.626198] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.656 Malloc0
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:02.656 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.914 [2024-07-24 19:21:08.685300] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:24:02.914 {
00:24:02.914 "params": {
00:24:02.914 "name": "Nvme$subsystem",
00:24:02.914 "trtype": "$TEST_TRANSPORT",
00:24:02.914 "traddr": "$NVMF_FIRST_TARGET_IP",
00:24:02.914 "adrfam": "ipv4",
00:24:02.914 "trsvcid": "$NVMF_PORT",
00:24:02.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:24:02.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:24:02.914 "hdgst": ${hdgst:-false},
00:24:02.914 "ddgst": ${ddgst:-false}
00:24:02.914 },
00:24:02.914 "method": "bdev_nvme_attach_controller"
00:24:02.914 }
00:24:02.914 EOF
00:24:02.914 )")
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:24:02.914 19:21:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:24:02.914 "params": {
00:24:02.914 "name": "Nvme1",
00:24:02.914 "trtype": "tcp",
00:24:02.914 "traddr": "10.0.0.2",
00:24:02.914 "adrfam": "ipv4",
00:24:02.914 "trsvcid": "4420",
00:24:02.914 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:02.914 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:24:02.914 "hdgst": false,
00:24:02.914 "ddgst": false
00:24:02.914 },
00:24:02.914 "method": "bdev_nvme_attach_controller"
00:24:02.914 }'
00:24:02.914 [2024-07-24 19:21:08.735858] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:24:02.914 [2024-07-24 19:21:08.735951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641454 ]
00:24:02.914 EAL: No free 2048 kB hugepages reported on node 1
00:24:02.914 [2024-07-24 19:21:08.796842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:03.173 [2024-07-24 19:21:08.917299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:03.173 Running I/O for 1 seconds...
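The rpc_cmd sequence traced above is the entire target-side bring-up: a TCP transport with an 8192-byte I/O unit, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, the namespace attached, and a listener on 10.0.0.2:4420. The same five calls can be replayed by hand with scripts/rpc.py against the target's default /var/tmp/spdk.sock; a sketch with the flags copied verbatim from the trace above (run from an SPDK checkout):

# manual replay of the tgt_init RPC sequence shown above
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420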
00:24:04.108 
00:24:04.108                                                                 Latency(us)
00:24:04.108 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:04.108 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:04.108    Verification LBA range: start 0x0 length 0x4000
00:24:04.108    Nvme1n1                  :       1.00    7430.52      29.03       0.00       0.00   17127.89    1195.43   16214.09
00:24:04.108 ===================================================================================================================
00:24:04.108 Total                       :               7430.52      29.03       0.00       0.00   17127.89    1195.43   16214.09
00:24:04.366 19:21:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2641909
00:24:04.366 19:21:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:24:04.366 19:21:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:24:04.366 19:21:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:24:04.366 19:21:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:24:04.366 19:21:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:24:04.367 19:21:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:24:04.367 19:21:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:24:04.367 {
00:24:04.367 "params": {
00:24:04.367 "name": "Nvme$subsystem",
00:24:04.367 "trtype": "$TEST_TRANSPORT",
00:24:04.367 "traddr": "$NVMF_FIRST_TARGET_IP",
00:24:04.367 "adrfam": "ipv4",
00:24:04.367 "trsvcid": "$NVMF_PORT",
00:24:04.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:24:04.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:24:04.367 "hdgst": ${hdgst:-false},
00:24:04.367 "ddgst": ${ddgst:-false}
00:24:04.367 },
00:24:04.367 "method": "bdev_nvme_attach_controller"
00:24:04.367 }
00:24:04.367 EOF
00:24:04.367 )")
00:24:04.367 19:21:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:24:04.367 19:21:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:24:04.370 19:21:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:24:04.370 19:21:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:24:04.370 "params": {
00:24:04.370 "name": "Nvme1",
00:24:04.370 "trtype": "tcp",
00:24:04.370 "traddr": "10.0.0.2",
00:24:04.370 "adrfam": "ipv4",
00:24:04.370 "trsvcid": "4420",
00:24:04.370 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:04.370 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:24:04.370 "hdgst": false,
00:24:04.370 "ddgst": false
00:24:04.370 },
00:24:04.370 "method": "bdev_nvme_attach_controller"
00:24:04.370 }'
00:24:04.625 [2024-07-24 19:21:10.357082] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:24:04.625 [2024-07-24 19:21:10.357174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641909 ]
00:24:04.625 EAL: No free 2048 kB hugepages reported on node 1
00:24:04.625 [2024-07-24 19:21:10.419194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:04.883 [2024-07-24 19:21:10.537849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:04.883 Running I/O for 15 seconds...
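Neither bdevperf invocation reads a config file from disk: gen_nvmf_target_json assembles a bdev_nvme_attach_controller stanza (piped through jq, as traced above) and the shell hands it over as --json /dev/fd/62 or /dev/fd/63 via process substitution. A standalone sketch of the same pattern follows; the params block is copied from the printf output above, while the outer "subsystems" wrapper is an assumption about the final shape gen_nvmf_target_json emits:

# config-over-fd sketch; flags copied from the 15-second run above
./build/examples/bdevperf -q 128 -o 4096 -w verify -t 15 -f --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
)

Handing the config over an anonymous fd keeps per-run credentials and addresses out of the workspace, which is why the harness prefers it to a temporary file.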
00:24:07.416 19:21:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2641300 00:24:07.416 19:21:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:07.416 [2024-07-24 19:21:13.321331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.416 [2024-07-24 19:21:13.321383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.416 [2024-07-24 19:21:13.321418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.416 [2024-07-24 19:21:13.321436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.416 [2024-07-24 19:21:13.321455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.416 [2024-07-24 19:21:13.321471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.416 [2024-07-24 19:21:13.321496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.416 [2024-07-24 19:21:13.321513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.416 [2024-07-24 19:21:13.321530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.416 [2024-07-24 19:21:13.321545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.416 [2024-07-24 19:21:13.321563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.416 [2024-07-24 19:21:13.321578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.416 [2024-07-24 19:21:13.321597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.416 [2024-07-24 19:21:13.321611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.416 [2024-07-24 19:21:13.321630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.416 [2024-07-24 19:21:13.321644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.416 [2024-07-24 19:21:13.321663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.416 [2024-07-24 19:21:13.321679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.416 [2024-07-24 19:21:13.321696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.416 [2024-07-24 
19:21:13.321711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.416 [2024-07-24 19:21:13.321738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.416 [2024-07-24 19:21:13.321754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.416 [2024-07-24 19:21:13.321772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.416 [2024-07-24 19:21:13.321787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.416 [2024-07-24 19:21:13.321804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.416 [2024-07-24 19:21:13.321818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.321835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.321850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.321867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.321882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.321899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.321914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.321931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.321945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.321963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.321977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.321994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.417 [2024-07-24 19:21:13.322106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.322967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.417 [2024-07-24 19:21:13.322983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.417 [2024-07-24 19:21:13.323000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:07.418 [2024-07-24 19:21:13.323031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323347] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.323978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.323993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.324010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:126 nsid:1 lba:15568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.324025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.324042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.324056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.324073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.324088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.324105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.324120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.324137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.324153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.418 [2024-07-24 19:21:13.324170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.418 [2024-07-24 19:21:13.324188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15648 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.419 [2024-07-24 19:21:13.324468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.419 [2024-07-24 19:21:13.324510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:07.419 [2024-07-24 19:21:13.324684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.324970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.324987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.325001] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.325022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.325036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.325053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.325068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.325085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.325099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.325116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.325130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.325148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.325163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.325181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.325196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.325213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.325228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.325245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.325259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.325276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.325291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.325307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.325323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.419 [2024-07-24 19:21:13.325339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.419 [2024-07-24 19:21:13.325354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.420 [2024-07-24 19:21:13.325370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.420 [2024-07-24 19:21:13.325385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.420 [2024-07-24 19:21:13.325401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.420 [2024-07-24 19:21:13.325420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.420 [2024-07-24 19:21:13.325438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.420 [2024-07-24 19:21:13.325452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.420 [2024-07-24 19:21:13.325469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.420 [2024-07-24 19:21:13.325493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.420 [2024-07-24 19:21:13.325512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.420 [2024-07-24 19:21:13.325527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.420 [2024-07-24 19:21:13.325543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123cbc0 is same with the state(5) to be set 00:24:07.420 [2024-07-24 19:21:13.325561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:07.420 [2024-07-24 19:21:13.325574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:07.420 [2024-07-24 19:21:13.325587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15928 len:8 PRP1 0x0 PRP2 0x0 00:24:07.420 [2024-07-24 19:21:13.325601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.420 [2024-07-24 19:21:13.325667] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x123cbc0 was disconnected and freed. reset controller. 
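Everything between the kill -9 of the target (pid 2641300) and this point is a single event from the host's point of view: the TCP connection died mid-workload, so bdev_nvme manually completes every command still outstanding on qpair 1 with ABORTED - SQ DELETION status, frees the qpair (0x123cbc0 above), and schedules a controller reset. Each aborted I/O produces one print_command/print_completion pair, so the queue depth in flight at the moment of failure can be recovered from a capture of this log (file name below is illustrative):

# count the I/O aborted by the qpair teardown; should be close to the -q 128 queue depth
grep -c 'ABORTED - SQ DELETION' bdevperf.log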
00:24:07.420 [2024-07-24 19:21:13.329819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.420 [2024-07-24 19:21:13.329901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.420 [2024-07-24 19:21:13.330731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.420 [2024-07-24 19:21:13.330784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.420 [2024-07-24 19:21:13.330802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.420 [2024-07-24 19:21:13.331067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.420 [2024-07-24 19:21:13.331336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.420 [2024-07-24 19:21:13.331359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.420 [2024-07-24 19:21:13.331377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.420 [2024-07-24 19:21:13.335430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.420 [2024-07-24 19:21:13.344561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.420 [2024-07-24 19:21:13.345087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.420 [2024-07-24 19:21:13.345137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.420 [2024-07-24 19:21:13.345154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.420 [2024-07-24 19:21:13.345418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.420 [2024-07-24 19:21:13.345705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.420 [2024-07-24 19:21:13.345728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.420 [2024-07-24 19:21:13.345743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.420 [2024-07-24 19:21:13.349802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.420 [2024-07-24 19:21:13.358907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.420 [2024-07-24 19:21:13.359404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.420 [2024-07-24 19:21:13.359433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.420 [2024-07-24 19:21:13.359451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.420 [2024-07-24 19:21:13.359725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.420 [2024-07-24 19:21:13.359992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.420 [2024-07-24 19:21:13.360014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.420 [2024-07-24 19:21:13.360029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.420 [2024-07-24 19:21:13.364108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.420 [2024-07-24 19:21:13.373434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.420 [2024-07-24 19:21:13.374018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.420 [2024-07-24 19:21:13.374072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.420 [2024-07-24 19:21:13.374091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.420 [2024-07-24 19:21:13.374362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.420 [2024-07-24 19:21:13.374643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.420 [2024-07-24 19:21:13.374667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.420 [2024-07-24 19:21:13.374682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.420 [2024-07-24 19:21:13.378739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.420 [2024-07-24 19:21:13.387831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.420 [2024-07-24 19:21:13.388370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.420 [2024-07-24 19:21:13.388422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.420 [2024-07-24 19:21:13.388441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.420 [2024-07-24 19:21:13.388725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.420 [2024-07-24 19:21:13.388993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.420 [2024-07-24 19:21:13.389016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.420 [2024-07-24 19:21:13.389031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.420 [2024-07-24 19:21:13.393084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.420 [2024-07-24 19:21:13.402177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.420 [2024-07-24 19:21:13.402696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.420 [2024-07-24 19:21:13.402736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.420 [2024-07-24 19:21:13.402755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.420 [2024-07-24 19:21:13.403026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.420 [2024-07-24 19:21:13.403301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.420 [2024-07-24 19:21:13.403323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.420 [2024-07-24 19:21:13.403338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.420 [2024-07-24 19:21:13.407401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.420 [2024-07-24 19:21:13.416714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.420 [2024-07-24 19:21:13.417303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.420 [2024-07-24 19:21:13.417344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.420 [2024-07-24 19:21:13.417363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.420 [2024-07-24 19:21:13.417646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.420 [2024-07-24 19:21:13.417915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.420 [2024-07-24 19:21:13.417938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.420 [2024-07-24 19:21:13.417953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.420 [2024-07-24 19:21:13.422023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.679 [2024-07-24 19:21:13.431430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.679 [2024-07-24 19:21:13.432016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.679 [2024-07-24 19:21:13.432058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.679 [2024-07-24 19:21:13.432077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.679 [2024-07-24 19:21:13.432348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.679 [2024-07-24 19:21:13.432630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.679 [2024-07-24 19:21:13.432653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.679 [2024-07-24 19:21:13.432669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.679 [2024-07-24 19:21:13.436749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.679 [2024-07-24 19:21:13.445932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.679 [2024-07-24 19:21:13.446520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.679 [2024-07-24 19:21:13.446561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.679 [2024-07-24 19:21:13.446586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.679 [2024-07-24 19:21:13.446857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.679 [2024-07-24 19:21:13.447125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.679 [2024-07-24 19:21:13.447147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.679 [2024-07-24 19:21:13.447162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.679 [2024-07-24 19:21:13.451224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.679 [2024-07-24 19:21:13.460329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.679 [2024-07-24 19:21:13.460908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.679 [2024-07-24 19:21:13.460949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.679 [2024-07-24 19:21:13.460969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.679 [2024-07-24 19:21:13.461240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.679 [2024-07-24 19:21:13.461524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.679 [2024-07-24 19:21:13.461547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.679 [2024-07-24 19:21:13.461562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.679 [2024-07-24 19:21:13.465637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.679 [2024-07-24 19:21:13.474712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.679 [2024-07-24 19:21:13.475219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.679 [2024-07-24 19:21:13.475260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.679 [2024-07-24 19:21:13.475278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.679 [2024-07-24 19:21:13.475563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.679 [2024-07-24 19:21:13.475832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.679 [2024-07-24 19:21:13.475854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.679 [2024-07-24 19:21:13.475870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.679 [2024-07-24 19:21:13.479923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.679 [2024-07-24 19:21:13.489283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.679 [2024-07-24 19:21:13.489806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.679 [2024-07-24 19:21:13.489855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.679 [2024-07-24 19:21:13.489873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.679 [2024-07-24 19:21:13.490138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.679 [2024-07-24 19:21:13.490404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.679 [2024-07-24 19:21:13.490438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.679 [2024-07-24 19:21:13.490454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.679 [2024-07-24 19:21:13.494537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.679 [2024-07-24 19:21:13.503679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.679 [2024-07-24 19:21:13.504138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.679 [2024-07-24 19:21:13.504168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.679 [2024-07-24 19:21:13.504186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.679 [2024-07-24 19:21:13.504450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.679 [2024-07-24 19:21:13.504726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.679 [2024-07-24 19:21:13.504748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.679 [2024-07-24 19:21:13.504764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.679 [2024-07-24 19:21:13.508802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.679 [2024-07-24 19:21:13.518128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.679 [2024-07-24 19:21:13.518706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.679 [2024-07-24 19:21:13.518748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.679 [2024-07-24 19:21:13.518767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.679 [2024-07-24 19:21:13.519038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.679 [2024-07-24 19:21:13.519312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.679 [2024-07-24 19:21:13.519334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.679 [2024-07-24 19:21:13.519349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.679 [2024-07-24 19:21:13.523400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.679 [2024-07-24 19:21:13.532495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.679 [2024-07-24 19:21:13.533014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.679 [2024-07-24 19:21:13.533056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.679 [2024-07-24 19:21:13.533075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.679 [2024-07-24 19:21:13.533345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.679 [2024-07-24 19:21:13.533627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.679 [2024-07-24 19:21:13.533650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.679 [2024-07-24 19:21:13.533666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.679 [2024-07-24 19:21:13.537727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.679 [2024-07-24 19:21:13.547049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.679 [2024-07-24 19:21:13.547573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.680 [2024-07-24 19:21:13.547615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.680 [2024-07-24 19:21:13.547634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.680 [2024-07-24 19:21:13.547904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.680 [2024-07-24 19:21:13.548172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.680 [2024-07-24 19:21:13.548195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.680 [2024-07-24 19:21:13.548210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.680 [2024-07-24 19:21:13.552282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.680 [2024-07-24 19:21:13.561401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.680 [2024-07-24 19:21:13.561846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.680 [2024-07-24 19:21:13.561887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.680 [2024-07-24 19:21:13.561906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.680 [2024-07-24 19:21:13.562176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.680 [2024-07-24 19:21:13.562444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.680 [2024-07-24 19:21:13.562466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.680 [2024-07-24 19:21:13.562494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.680 [2024-07-24 19:21:13.566545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.680 [2024-07-24 19:21:13.575825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.680 [2024-07-24 19:21:13.576439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.680 [2024-07-24 19:21:13.576491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.680 [2024-07-24 19:21:13.576513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.680 [2024-07-24 19:21:13.576784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.680 [2024-07-24 19:21:13.577059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.680 [2024-07-24 19:21:13.577082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.680 [2024-07-24 19:21:13.577098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.680 [2024-07-24 19:21:13.581222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.680 [2024-07-24 19:21:13.590373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.680 [2024-07-24 19:21:13.590932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.680 [2024-07-24 19:21:13.590974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.680 [2024-07-24 19:21:13.590993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.680 [2024-07-24 19:21:13.591276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.680 [2024-07-24 19:21:13.591554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.680 [2024-07-24 19:21:13.591577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.680 [2024-07-24 19:21:13.591593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.680 [2024-07-24 19:21:13.595674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.680 [2024-07-24 19:21:13.604747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.680 [2024-07-24 19:21:13.605231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.680 [2024-07-24 19:21:13.605281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.680 [2024-07-24 19:21:13.605299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.680 [2024-07-24 19:21:13.605577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.680 [2024-07-24 19:21:13.605845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.680 [2024-07-24 19:21:13.605867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.680 [2024-07-24 19:21:13.605882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.680 [2024-07-24 19:21:13.609925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.680 [2024-07-24 19:21:13.619267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.680 [2024-07-24 19:21:13.619799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.680 [2024-07-24 19:21:13.619841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.680 [2024-07-24 19:21:13.619860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.680 [2024-07-24 19:21:13.620130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.680 [2024-07-24 19:21:13.620398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.680 [2024-07-24 19:21:13.620421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.680 [2024-07-24 19:21:13.620436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.680 [2024-07-24 19:21:13.624485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.680 [2024-07-24 19:21:13.633828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.680 [2024-07-24 19:21:13.634403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.680 [2024-07-24 19:21:13.634458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.680 [2024-07-24 19:21:13.634477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.680 [2024-07-24 19:21:13.634762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.680 [2024-07-24 19:21:13.635029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.680 [2024-07-24 19:21:13.635051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.680 [2024-07-24 19:21:13.635073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.680 [2024-07-24 19:21:13.639128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.680 [2024-07-24 19:21:13.648238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.680 [2024-07-24 19:21:13.648767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.680 [2024-07-24 19:21:13.648807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.680 [2024-07-24 19:21:13.648826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.680 [2024-07-24 19:21:13.649097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.680 [2024-07-24 19:21:13.649365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.680 [2024-07-24 19:21:13.649387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.680 [2024-07-24 19:21:13.649402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.680 [2024-07-24 19:21:13.653470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.680 [2024-07-24 19:21:13.662798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.680 [2024-07-24 19:21:13.663336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.680 [2024-07-24 19:21:13.663377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.680 [2024-07-24 19:21:13.663395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.680 [2024-07-24 19:21:13.663678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.680 [2024-07-24 19:21:13.663948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.680 [2024-07-24 19:21:13.663970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.680 [2024-07-24 19:21:13.663985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.680 [2024-07-24 19:21:13.668029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.680 [2024-07-24 19:21:13.677362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.680 [2024-07-24 19:21:13.677960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.680 [2024-07-24 19:21:13.678002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.680 [2024-07-24 19:21:13.678021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.680 [2024-07-24 19:21:13.678291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.680 [2024-07-24 19:21:13.678572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.681 [2024-07-24 19:21:13.678596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.681 [2024-07-24 19:21:13.678611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.681 [2024-07-24 19:21:13.682690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.939 [2024-07-24 19:21:13.691951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.939 [2024-07-24 19:21:13.692499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.939 [2024-07-24 19:21:13.692569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.939 [2024-07-24 19:21:13.692608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.939 [2024-07-24 19:21:13.692880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.939 [2024-07-24 19:21:13.693149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.939 [2024-07-24 19:21:13.693171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.939 [2024-07-24 19:21:13.693187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.939 [2024-07-24 19:21:13.697342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.939 [2024-07-24 19:21:13.706469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.939 [2024-07-24 19:21:13.707023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.939 [2024-07-24 19:21:13.707075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.939 [2024-07-24 19:21:13.707093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.939 [2024-07-24 19:21:13.707357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.939 [2024-07-24 19:21:13.707635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.939 [2024-07-24 19:21:13.707658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.939 [2024-07-24 19:21:13.707673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.939 [2024-07-24 19:21:13.711741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.939 [2024-07-24 19:21:13.720842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.939 [2024-07-24 19:21:13.721407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.939 [2024-07-24 19:21:13.721448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.939 [2024-07-24 19:21:13.721467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.939 [2024-07-24 19:21:13.721749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.939 [2024-07-24 19:21:13.722018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.939 [2024-07-24 19:21:13.722041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.939 [2024-07-24 19:21:13.722056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.939 [2024-07-24 19:21:13.726132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.939 [2024-07-24 19:21:13.735281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.939 [2024-07-24 19:21:13.735854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.939 [2024-07-24 19:21:13.735896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.939 [2024-07-24 19:21:13.735915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.939 [2024-07-24 19:21:13.736186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.939 [2024-07-24 19:21:13.736461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.939 [2024-07-24 19:21:13.736496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.939 [2024-07-24 19:21:13.736513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.939 [2024-07-24 19:21:13.740597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.939 [2024-07-24 19:21:13.749741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.939 [2024-07-24 19:21:13.750260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.939 [2024-07-24 19:21:13.750300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.939 [2024-07-24 19:21:13.750319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.939 [2024-07-24 19:21:13.750605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.939 [2024-07-24 19:21:13.750874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.939 [2024-07-24 19:21:13.750896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.939 [2024-07-24 19:21:13.750912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.939 [2024-07-24 19:21:13.754992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.939 [2024-07-24 19:21:13.764130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.939 [2024-07-24 19:21:13.764637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.939 [2024-07-24 19:21:13.764668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.939 [2024-07-24 19:21:13.764685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.939 [2024-07-24 19:21:13.764949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.939 [2024-07-24 19:21:13.765216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.939 [2024-07-24 19:21:13.765238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.939 [2024-07-24 19:21:13.765253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.939 [2024-07-24 19:21:13.769336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.939 [2024-07-24 19:21:13.778708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.939 [2024-07-24 19:21:13.779233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.939 [2024-07-24 19:21:13.779284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.939 [2024-07-24 19:21:13.779301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.939 [2024-07-24 19:21:13.779577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.939 [2024-07-24 19:21:13.779844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.939 [2024-07-24 19:21:13.779866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.939 [2024-07-24 19:21:13.779881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.939 [2024-07-24 19:21:13.783962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.939 [2024-07-24 19:21:13.793060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.939 [2024-07-24 19:21:13.793507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.939 [2024-07-24 19:21:13.793547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.939 [2024-07-24 19:21:13.793566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.939 [2024-07-24 19:21:13.793836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.939 [2024-07-24 19:21:13.794105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.940 [2024-07-24 19:21:13.794127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.940 [2024-07-24 19:21:13.794142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.940 [2024-07-24 19:21:13.798199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.940 [2024-07-24 19:21:13.807567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.940 [2024-07-24 19:21:13.808155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.940 [2024-07-24 19:21:13.808195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.940 [2024-07-24 19:21:13.808214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.940 [2024-07-24 19:21:13.808499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.940 [2024-07-24 19:21:13.808768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.940 [2024-07-24 19:21:13.808791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.940 [2024-07-24 19:21:13.808806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.940 [2024-07-24 19:21:13.812895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.940 [2024-07-24 19:21:13.822055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.940 [2024-07-24 19:21:13.822565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.940 [2024-07-24 19:21:13.822607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.940 [2024-07-24 19:21:13.822625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.940 [2024-07-24 19:21:13.822896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.940 [2024-07-24 19:21:13.823164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.940 [2024-07-24 19:21:13.823186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.940 [2024-07-24 19:21:13.823202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.940 [2024-07-24 19:21:13.827303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.940 [2024-07-24 19:21:13.836519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.940 [2024-07-24 19:21:13.837114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.940 [2024-07-24 19:21:13.837161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.940 [2024-07-24 19:21:13.837181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.940 [2024-07-24 19:21:13.837456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.940 [2024-07-24 19:21:13.837742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.940 [2024-07-24 19:21:13.837766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.940 [2024-07-24 19:21:13.837782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.940 [2024-07-24 19:21:13.841882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.940 [2024-07-24 19:21:13.851110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.940 [2024-07-24 19:21:13.851682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.940 [2024-07-24 19:21:13.851724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.940 [2024-07-24 19:21:13.851743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.940 [2024-07-24 19:21:13.852014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.940 [2024-07-24 19:21:13.852282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.940 [2024-07-24 19:21:13.852304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.940 [2024-07-24 19:21:13.852320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.940 [2024-07-24 19:21:13.856389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.940 [2024-07-24 19:21:13.865478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.940 [2024-07-24 19:21:13.866026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.940 [2024-07-24 19:21:13.866080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.940 [2024-07-24 19:21:13.866099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.940 [2024-07-24 19:21:13.866369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.940 [2024-07-24 19:21:13.866649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.940 [2024-07-24 19:21:13.866673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.940 [2024-07-24 19:21:13.866688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.940 [2024-07-24 19:21:13.870795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.940 [2024-07-24 19:21:13.879964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.940 [2024-07-24 19:21:13.880549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.940 [2024-07-24 19:21:13.880590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.940 [2024-07-24 19:21:13.880609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.940 [2024-07-24 19:21:13.880886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.940 [2024-07-24 19:21:13.881168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.940 [2024-07-24 19:21:13.881191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.940 [2024-07-24 19:21:13.881207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.940 [2024-07-24 19:21:13.885312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.940 [2024-07-24 19:21:13.894442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.940 [2024-07-24 19:21:13.895020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.940 [2024-07-24 19:21:13.895074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.940 [2024-07-24 19:21:13.895094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.940 [2024-07-24 19:21:13.895364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.940 [2024-07-24 19:21:13.895645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.940 [2024-07-24 19:21:13.895668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.940 [2024-07-24 19:21:13.895684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.940 [2024-07-24 19:21:13.899774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.940 [2024-07-24 19:21:13.908905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.940 [2024-07-24 19:21:13.909519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.940 [2024-07-24 19:21:13.909560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.940 [2024-07-24 19:21:13.909579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.940 [2024-07-24 19:21:13.909850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.940 [2024-07-24 19:21:13.910118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.940 [2024-07-24 19:21:13.910140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.940 [2024-07-24 19:21:13.910155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.940 [2024-07-24 19:21:13.914204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.940 [2024-07-24 19:21:13.923276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.940 [2024-07-24 19:21:13.923802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.940 [2024-07-24 19:21:13.923849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:07.940 [2024-07-24 19:21:13.923867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:07.940 [2024-07-24 19:21:13.924131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:07.940 [2024-07-24 19:21:13.924398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.940 [2024-07-24 19:21:13.924420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.940 [2024-07-24 19:21:13.924435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.940 [2024-07-24 19:21:13.928520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the nine-message reset cycle shown above repeats another 50 times for tqpair=0x100a8d0, one attempt roughly every 14-15 ms, from [2024-07-24 19:21:13.937650] through [2024-07-24 19:21:14.652368]; the cycles are identical apart from their timestamps, and every one ends with "_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed." ...]
00:24:08.725 [2024-07-24 19:21:14.661538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.725 [2024-07-24 19:21:14.662134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.725 [2024-07-24 19:21:14.662175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.725 [2024-07-24 19:21:14.662194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.725 [2024-07-24 19:21:14.662473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.725 [2024-07-24 19:21:14.662755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.725 [2024-07-24 19:21:14.662777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.725 [2024-07-24 19:21:14.662792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.725 [2024-07-24 19:21:14.666880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.725 [2024-07-24 19:21:14.676031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.725 [2024-07-24 19:21:14.676670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.725 [2024-07-24 19:21:14.676712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.725 [2024-07-24 19:21:14.676731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.725 [2024-07-24 19:21:14.677002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.725 [2024-07-24 19:21:14.677270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.725 [2024-07-24 19:21:14.677292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.725 [2024-07-24 19:21:14.677307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.725 [2024-07-24 19:21:14.681386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.725 [2024-07-24 19:21:14.690507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.725 [2024-07-24 19:21:14.691074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.725 [2024-07-24 19:21:14.691128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.725 [2024-07-24 19:21:14.691147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.725 [2024-07-24 19:21:14.691417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.725 [2024-07-24 19:21:14.691703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.725 [2024-07-24 19:21:14.691726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.725 [2024-07-24 19:21:14.691742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.725 [2024-07-24 19:21:14.695813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.725 [2024-07-24 19:21:14.704963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.725 [2024-07-24 19:21:14.705421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.725 [2024-07-24 19:21:14.705468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.725 [2024-07-24 19:21:14.705500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.725 [2024-07-24 19:21:14.705776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.725 [2024-07-24 19:21:14.706045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.725 [2024-07-24 19:21:14.706067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.725 [2024-07-24 19:21:14.706088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.725 [2024-07-24 19:21:14.710146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.725 [2024-07-24 19:21:14.719501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.725 [2024-07-24 19:21:14.720099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.725 [2024-07-24 19:21:14.720154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.725 [2024-07-24 19:21:14.720173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.725 [2024-07-24 19:21:14.720444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.725 [2024-07-24 19:21:14.720731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.725 [2024-07-24 19:21:14.720755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.725 [2024-07-24 19:21:14.720770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.725 [2024-07-24 19:21:14.724840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.725 [2024-07-24 19:21:14.734011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.725 [2024-07-24 19:21:14.734477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.725 [2024-07-24 19:21:14.734550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.725 [2024-07-24 19:21:14.734579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.725 [2024-07-24 19:21:14.734865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.725 [2024-07-24 19:21:14.735149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.725 [2024-07-24 19:21:14.735174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.725 [2024-07-24 19:21:14.735190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.985 [2024-07-24 19:21:14.739342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.985 [2024-07-24 19:21:14.748527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.985 [2024-07-24 19:21:14.749108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.985 [2024-07-24 19:21:14.749150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.985 [2024-07-24 19:21:14.749169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.985 [2024-07-24 19:21:14.749440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.985 [2024-07-24 19:21:14.749719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.985 [2024-07-24 19:21:14.749742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.985 [2024-07-24 19:21:14.749758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.985 [2024-07-24 19:21:14.753802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.985 [2024-07-24 19:21:14.762908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.985 [2024-07-24 19:21:14.763431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.985 [2024-07-24 19:21:14.763478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.985 [2024-07-24 19:21:14.763518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.985 [2024-07-24 19:21:14.763788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.985 [2024-07-24 19:21:14.764057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.985 [2024-07-24 19:21:14.764079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.985 [2024-07-24 19:21:14.764095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.985 [2024-07-24 19:21:14.768157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.985 [2024-07-24 19:21:14.777311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.985 [2024-07-24 19:21:14.777910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.985 [2024-07-24 19:21:14.777952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.985 [2024-07-24 19:21:14.777971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.985 [2024-07-24 19:21:14.778242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.985 [2024-07-24 19:21:14.778523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.985 [2024-07-24 19:21:14.778546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.985 [2024-07-24 19:21:14.778561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.985 [2024-07-24 19:21:14.782615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.985 [2024-07-24 19:21:14.791727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.985 [2024-07-24 19:21:14.792312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.985 [2024-07-24 19:21:14.792353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.985 [2024-07-24 19:21:14.792371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.985 [2024-07-24 19:21:14.792656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.985 [2024-07-24 19:21:14.792925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.985 [2024-07-24 19:21:14.792947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.985 [2024-07-24 19:21:14.792963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.985 [2024-07-24 19:21:14.797027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.985 [2024-07-24 19:21:14.806136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.985 [2024-07-24 19:21:14.806735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.985 [2024-07-24 19:21:14.806792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.985 [2024-07-24 19:21:14.806811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.985 [2024-07-24 19:21:14.807081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.985 [2024-07-24 19:21:14.807358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.985 [2024-07-24 19:21:14.807380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.985 [2024-07-24 19:21:14.807396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.985 [2024-07-24 19:21:14.811459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.985 [2024-07-24 19:21:14.820549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.985 [2024-07-24 19:21:14.821081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.985 [2024-07-24 19:21:14.821144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.985 [2024-07-24 19:21:14.821161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.985 [2024-07-24 19:21:14.821431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.985 [2024-07-24 19:21:14.821708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.985 [2024-07-24 19:21:14.821731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.985 [2024-07-24 19:21:14.821746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.985 [2024-07-24 19:21:14.825789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.985 [2024-07-24 19:21:14.835130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.985 [2024-07-24 19:21:14.835584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.985 [2024-07-24 19:21:14.835625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.985 [2024-07-24 19:21:14.835645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.986 [2024-07-24 19:21:14.835915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.986 [2024-07-24 19:21:14.836189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.986 [2024-07-24 19:21:14.836211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.986 [2024-07-24 19:21:14.836226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.986 [2024-07-24 19:21:14.840277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.986 [2024-07-24 19:21:14.849714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.986 [2024-07-24 19:21:14.850288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.986 [2024-07-24 19:21:14.850329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.986 [2024-07-24 19:21:14.850348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.986 [2024-07-24 19:21:14.850629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.986 [2024-07-24 19:21:14.850898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.986 [2024-07-24 19:21:14.850920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.986 [2024-07-24 19:21:14.850936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.986 [2024-07-24 19:21:14.854992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.986 [2024-07-24 19:21:14.864071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.986 [2024-07-24 19:21:14.864567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.986 [2024-07-24 19:21:14.864600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.986 [2024-07-24 19:21:14.864617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.986 [2024-07-24 19:21:14.864882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.986 [2024-07-24 19:21:14.865149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.986 [2024-07-24 19:21:14.865171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.986 [2024-07-24 19:21:14.865186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.986 [2024-07-24 19:21:14.869238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.986 [2024-07-24 19:21:14.878572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.986 [2024-07-24 19:21:14.879099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.986 [2024-07-24 19:21:14.879139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.986 [2024-07-24 19:21:14.879158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.986 [2024-07-24 19:21:14.879429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.986 [2024-07-24 19:21:14.879709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.986 [2024-07-24 19:21:14.879733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.986 [2024-07-24 19:21:14.879748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.986 [2024-07-24 19:21:14.883794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.986 [2024-07-24 19:21:14.893127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.986 [2024-07-24 19:21:14.893704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.986 [2024-07-24 19:21:14.893746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.986 [2024-07-24 19:21:14.893765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.986 [2024-07-24 19:21:14.894035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.986 [2024-07-24 19:21:14.894303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.986 [2024-07-24 19:21:14.894325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.986 [2024-07-24 19:21:14.894342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.986 [2024-07-24 19:21:14.898403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.986 [2024-07-24 19:21:14.907511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.986 [2024-07-24 19:21:14.908019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.986 [2024-07-24 19:21:14.908059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.986 [2024-07-24 19:21:14.908084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.986 [2024-07-24 19:21:14.908356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.986 [2024-07-24 19:21:14.908637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.986 [2024-07-24 19:21:14.908659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.986 [2024-07-24 19:21:14.908675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.986 [2024-07-24 19:21:14.912749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.986 [2024-07-24 19:21:14.922063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.986 [2024-07-24 19:21:14.922545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.986 [2024-07-24 19:21:14.922577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.986 [2024-07-24 19:21:14.922595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.986 [2024-07-24 19:21:14.922859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.986 [2024-07-24 19:21:14.923126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.986 [2024-07-24 19:21:14.923148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.986 [2024-07-24 19:21:14.923163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.986 [2024-07-24 19:21:14.927217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.986 [2024-07-24 19:21:14.936583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.986 [2024-07-24 19:21:14.937153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.986 [2024-07-24 19:21:14.937194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.986 [2024-07-24 19:21:14.937213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.986 [2024-07-24 19:21:14.937496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.986 [2024-07-24 19:21:14.937764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.986 [2024-07-24 19:21:14.937787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.986 [2024-07-24 19:21:14.937802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.986 [2024-07-24 19:21:14.941849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.986 [2024-07-24 19:21:14.950917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.986 [2024-07-24 19:21:14.951450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.986 [2024-07-24 19:21:14.951511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.986 [2024-07-24 19:21:14.951531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.986 [2024-07-24 19:21:14.951801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.986 [2024-07-24 19:21:14.952069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.986 [2024-07-24 19:21:14.952098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.986 [2024-07-24 19:21:14.952115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.986 [2024-07-24 19:21:14.956161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.986 [2024-07-24 19:21:14.965271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.986 [2024-07-24 19:21:14.965853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.986 [2024-07-24 19:21:14.965894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.986 [2024-07-24 19:21:14.965913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.986 [2024-07-24 19:21:14.966184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.986 [2024-07-24 19:21:14.966452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.986 [2024-07-24 19:21:14.966474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.986 [2024-07-24 19:21:14.966502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.986 [2024-07-24 19:21:14.970559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.986 [2024-07-24 19:21:14.979645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.987 [2024-07-24 19:21:14.980199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.987 [2024-07-24 19:21:14.980240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.987 [2024-07-24 19:21:14.980258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.987 [2024-07-24 19:21:14.980544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.987 [2024-07-24 19:21:14.980813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.987 [2024-07-24 19:21:14.980835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.987 [2024-07-24 19:21:14.980850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.987 [2024-07-24 19:21:14.984912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.987 [2024-07-24 19:21:14.994072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.987 [2024-07-24 19:21:14.994693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.987 [2024-07-24 19:21:14.994735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:08.987 [2024-07-24 19:21:14.994755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:08.987 [2024-07-24 19:21:14.995025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:08.987 [2024-07-24 19:21:14.995321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.987 [2024-07-24 19:21:14.995356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.987 [2024-07-24 19:21:14.995382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.245 [2024-07-24 19:21:14.999580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.245 [2024-07-24 19:21:15.008563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.245 [2024-07-24 19:21:15.008950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.245 [2024-07-24 19:21:15.008982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.245 [2024-07-24 19:21:15.009000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.245 [2024-07-24 19:21:15.009265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.245 [2024-07-24 19:21:15.009544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.245 [2024-07-24 19:21:15.009567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.245 [2024-07-24 19:21:15.009583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.245 [2024-07-24 19:21:15.013625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.245 [2024-07-24 19:21:15.022921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.245 [2024-07-24 19:21:15.023461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.245 [2024-07-24 19:21:15.023511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.245 [2024-07-24 19:21:15.023531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.245 [2024-07-24 19:21:15.023808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.245 [2024-07-24 19:21:15.024076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.245 [2024-07-24 19:21:15.024099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.245 [2024-07-24 19:21:15.024114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.245 [2024-07-24 19:21:15.028176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.245 [2024-07-24 19:21:15.037266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.245 [2024-07-24 19:21:15.037854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.245 [2024-07-24 19:21:15.037896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.245 [2024-07-24 19:21:15.037914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.245 [2024-07-24 19:21:15.038185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.245 [2024-07-24 19:21:15.038453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.245 [2024-07-24 19:21:15.038475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.245 [2024-07-24 19:21:15.038504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.245 [2024-07-24 19:21:15.042560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.245 [2024-07-24 19:21:15.051676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.245 [2024-07-24 19:21:15.052197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.245 [2024-07-24 19:21:15.052251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.245 [2024-07-24 19:21:15.052271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.245 [2024-07-24 19:21:15.052561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.245 [2024-07-24 19:21:15.052831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.245 [2024-07-24 19:21:15.052853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.245 [2024-07-24 19:21:15.052868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.245 [2024-07-24 19:21:15.056912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.245 [2024-07-24 19:21:15.066255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.245 [2024-07-24 19:21:15.066757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.245 [2024-07-24 19:21:15.066837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.245 [2024-07-24 19:21:15.066856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.246 [2024-07-24 19:21:15.067126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.246 [2024-07-24 19:21:15.067394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.246 [2024-07-24 19:21:15.067417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.246 [2024-07-24 19:21:15.067432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.246 [2024-07-24 19:21:15.071499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.246 [2024-07-24 19:21:15.080824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.246 [2024-07-24 19:21:15.081336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.246 [2024-07-24 19:21:15.081390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.246 [2024-07-24 19:21:15.081408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.246 [2024-07-24 19:21:15.081697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.246 [2024-07-24 19:21:15.081966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.246 [2024-07-24 19:21:15.081988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.246 [2024-07-24 19:21:15.082004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.246 [2024-07-24 19:21:15.086065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.246 [2024-07-24 19:21:15.095191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.246 [2024-07-24 19:21:15.095729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.246 [2024-07-24 19:21:15.095771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.246 [2024-07-24 19:21:15.095790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.246 [2024-07-24 19:21:15.096066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.246 [2024-07-24 19:21:15.096348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.246 [2024-07-24 19:21:15.096371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.246 [2024-07-24 19:21:15.096395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.246 [2024-07-24 19:21:15.100507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.246 [2024-07-24 19:21:15.109633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.246 [2024-07-24 19:21:15.110204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.246 [2024-07-24 19:21:15.110245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.246 [2024-07-24 19:21:15.110264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.246 [2024-07-24 19:21:15.110545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.246 [2024-07-24 19:21:15.110815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.246 [2024-07-24 19:21:15.110837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.246 [2024-07-24 19:21:15.110852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.246 [2024-07-24 19:21:15.114899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.246 [2024-07-24 19:21:15.123992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.246 [2024-07-24 19:21:15.124465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.246 [2024-07-24 19:21:15.124528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.246 [2024-07-24 19:21:15.124545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.246 [2024-07-24 19:21:15.124810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.246 [2024-07-24 19:21:15.125077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.246 [2024-07-24 19:21:15.125099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.246 [2024-07-24 19:21:15.125114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.246 [2024-07-24 19:21:15.129174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.246 [2024-07-24 19:21:15.138500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.246 [2024-07-24 19:21:15.138928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.246 [2024-07-24 19:21:15.138982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.246 [2024-07-24 19:21:15.138999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.246 [2024-07-24 19:21:15.139262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.246 [2024-07-24 19:21:15.139541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.246 [2024-07-24 19:21:15.139564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.246 [2024-07-24 19:21:15.139580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.246 [2024-07-24 19:21:15.143621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.246 [2024-07-24 19:21:15.152966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.246 [2024-07-24 19:21:15.153512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.246 [2024-07-24 19:21:15.153567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.246 [2024-07-24 19:21:15.153586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.246 [2024-07-24 19:21:15.153857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.246 [2024-07-24 19:21:15.154124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.246 [2024-07-24 19:21:15.154146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.246 [2024-07-24 19:21:15.154161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.246 [2024-07-24 19:21:15.158207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.246 [2024-07-24 19:21:15.167565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.246 [2024-07-24 19:21:15.168036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.246 [2024-07-24 19:21:15.168085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.246 [2024-07-24 19:21:15.168102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.246 [2024-07-24 19:21:15.168366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.246 [2024-07-24 19:21:15.168649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.246 [2024-07-24 19:21:15.168671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.246 [2024-07-24 19:21:15.168687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.246 [2024-07-24 19:21:15.172742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.246 [2024-07-24 19:21:15.182074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.246 [2024-07-24 19:21:15.182561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.246 [2024-07-24 19:21:15.182591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.246 [2024-07-24 19:21:15.182609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.246 [2024-07-24 19:21:15.182872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.246 [2024-07-24 19:21:15.183139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.246 [2024-07-24 19:21:15.183160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.246 [2024-07-24 19:21:15.183175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.246 [2024-07-24 19:21:15.187224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.246 [2024-07-24 19:21:15.196570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.246 [2024-07-24 19:21:15.197061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.246 [2024-07-24 19:21:15.197114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.246 [2024-07-24 19:21:15.197134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.246 [2024-07-24 19:21:15.197418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.247 [2024-07-24 19:21:15.197698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.247 [2024-07-24 19:21:15.197722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.247 [2024-07-24 19:21:15.197738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.247 [2024-07-24 19:21:15.201792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.247 [2024-07-24 19:21:15.211094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.247 [2024-07-24 19:21:15.211692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.247 [2024-07-24 19:21:15.211733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.247 [2024-07-24 19:21:15.211753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.247 [2024-07-24 19:21:15.212023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.247 [2024-07-24 19:21:15.212292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.247 [2024-07-24 19:21:15.212314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.247 [2024-07-24 19:21:15.212329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.247 [2024-07-24 19:21:15.216372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.247 [2024-07-24 19:21:15.225470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.247 [2024-07-24 19:21:15.226010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.247 [2024-07-24 19:21:15.226050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:09.247 [2024-07-24 19:21:15.226070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:09.247 [2024-07-24 19:21:15.226340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:09.247 [2024-07-24 19:21:15.226621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.247 [2024-07-24 19:21:15.226644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.247 [2024-07-24 19:21:15.226659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.247 [2024-07-24 19:21:15.230705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.247 [2024-07-24 19:21:15.240041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.247 [2024-07-24 19:21:15.240568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.247 [2024-07-24 19:21:15.240610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.247 [2024-07-24 19:21:15.240629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.247 [2024-07-24 19:21:15.240900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.247 [2024-07-24 19:21:15.241168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.247 [2024-07-24 19:21:15.241190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.247 [2024-07-24 19:21:15.241211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.247 [2024-07-24 19:21:15.245283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.247 [2024-07-24 19:21:15.254623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.247 [2024-07-24 19:21:15.255193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.247 [2024-07-24 19:21:15.255265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.247 [2024-07-24 19:21:15.255288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.247 [2024-07-24 19:21:15.255573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.247 [2024-07-24 19:21:15.255857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.247 [2024-07-24 19:21:15.255883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.247 [2024-07-24 19:21:15.255907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.506 [2024-07-24 19:21:15.260155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.506 [2024-07-24 19:21:15.269113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.506 [2024-07-24 19:21:15.269612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.506 [2024-07-24 19:21:15.269652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.506 [2024-07-24 19:21:15.269672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.506 [2024-07-24 19:21:15.269942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.506 [2024-07-24 19:21:15.270210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.506 [2024-07-24 19:21:15.270232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.506 [2024-07-24 19:21:15.270248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.506 [2024-07-24 19:21:15.274296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.506 [2024-07-24 19:21:15.283617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.506 [2024-07-24 19:21:15.284022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.506 [2024-07-24 19:21:15.284107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.506 [2024-07-24 19:21:15.284126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.506 [2024-07-24 19:21:15.284391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.506 [2024-07-24 19:21:15.284668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.506 [2024-07-24 19:21:15.284692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.506 [2024-07-24 19:21:15.284708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.506 [2024-07-24 19:21:15.288753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.506 [2024-07-24 19:21:15.298053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.506 [2024-07-24 19:21:15.298534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.506 [2024-07-24 19:21:15.298570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.506 [2024-07-24 19:21:15.298588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.506 [2024-07-24 19:21:15.298852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.506 [2024-07-24 19:21:15.299118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.506 [2024-07-24 19:21:15.299140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.506 [2024-07-24 19:21:15.299155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.506 [2024-07-24 19:21:15.303243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.506 [2024-07-24 19:21:15.312602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.506 [2024-07-24 19:21:15.313097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.506 [2024-07-24 19:21:15.313139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.506 [2024-07-24 19:21:15.313157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.506 [2024-07-24 19:21:15.313428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.506 [2024-07-24 19:21:15.313708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.506 [2024-07-24 19:21:15.313732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.506 [2024-07-24 19:21:15.313747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.506 [2024-07-24 19:21:15.317800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.506 [2024-07-24 19:21:15.327111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.506 [2024-07-24 19:21:15.327674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.506 [2024-07-24 19:21:15.327715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.506 [2024-07-24 19:21:15.327734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.506 [2024-07-24 19:21:15.328004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.506 [2024-07-24 19:21:15.328272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.506 [2024-07-24 19:21:15.328295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.506 [2024-07-24 19:21:15.328311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.506 [2024-07-24 19:21:15.332355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.506 [2024-07-24 19:21:15.341669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.506 [2024-07-24 19:21:15.342181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.506 [2024-07-24 19:21:15.342231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.506 [2024-07-24 19:21:15.342249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.506 [2024-07-24 19:21:15.342524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.506 [2024-07-24 19:21:15.342797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.506 [2024-07-24 19:21:15.342820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.506 [2024-07-24 19:21:15.342835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.506 [2024-07-24 19:21:15.346898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.506 [2024-07-24 19:21:15.356046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.506 [2024-07-24 19:21:15.356639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.506 [2024-07-24 19:21:15.356681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.506 [2024-07-24 19:21:15.356700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.506 [2024-07-24 19:21:15.356970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.506 [2024-07-24 19:21:15.357239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.506 [2024-07-24 19:21:15.357261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.506 [2024-07-24 19:21:15.357276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.506 [2024-07-24 19:21:15.361327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.506 [2024-07-24 19:21:15.370394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.506 [2024-07-24 19:21:15.370966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.506 [2024-07-24 19:21:15.371008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.506 [2024-07-24 19:21:15.371027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.506 [2024-07-24 19:21:15.371297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.506 [2024-07-24 19:21:15.371579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.506 [2024-07-24 19:21:15.371611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.506 [2024-07-24 19:21:15.371626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.506 [2024-07-24 19:21:15.375669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.506 [2024-07-24 19:21:15.384761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.506 [2024-07-24 19:21:15.385277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.506 [2024-07-24 19:21:15.385324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.506 [2024-07-24 19:21:15.385343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.506 [2024-07-24 19:21:15.385619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.506 [2024-07-24 19:21:15.385887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.506 [2024-07-24 19:21:15.385909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.506 [2024-07-24 19:21:15.385925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.506 [2024-07-24 19:21:15.390022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.506 [2024-07-24 19:21:15.399100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.506 [2024-07-24 19:21:15.399525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.506 [2024-07-24 19:21:15.399567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.507 [2024-07-24 19:21:15.399586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.507 [2024-07-24 19:21:15.399858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.507 [2024-07-24 19:21:15.400127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.507 [2024-07-24 19:21:15.400150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.507 [2024-07-24 19:21:15.400166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.507 [2024-07-24 19:21:15.404211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.507 [2024-07-24 19:21:15.413521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.507 [2024-07-24 19:21:15.413945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.507 [2024-07-24 19:21:15.413977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.507 [2024-07-24 19:21:15.413994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.507 [2024-07-24 19:21:15.414258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.507 [2024-07-24 19:21:15.414535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.507 [2024-07-24 19:21:15.414559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.507 [2024-07-24 19:21:15.414574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.507 [2024-07-24 19:21:15.418616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.507 [2024-07-24 19:21:15.427920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.507 [2024-07-24 19:21:15.428323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.507 [2024-07-24 19:21:15.428353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.507 [2024-07-24 19:21:15.428370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.507 [2024-07-24 19:21:15.428643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.507 [2024-07-24 19:21:15.428910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.507 [2024-07-24 19:21:15.428933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.507 [2024-07-24 19:21:15.428948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.507 [2024-07-24 19:21:15.432989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.507 [2024-07-24 19:21:15.442313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.507 [2024-07-24 19:21:15.442845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.507 [2024-07-24 19:21:15.442887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.507 [2024-07-24 19:21:15.442916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.507 [2024-07-24 19:21:15.443188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.507 [2024-07-24 19:21:15.443457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.507 [2024-07-24 19:21:15.443490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.507 [2024-07-24 19:21:15.443508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.507 [2024-07-24 19:21:15.447552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.507 [2024-07-24 19:21:15.456656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.507 [2024-07-24 19:21:15.457181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.507 [2024-07-24 19:21:15.457224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.507 [2024-07-24 19:21:15.457243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.507 [2024-07-24 19:21:15.457516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.507 [2024-07-24 19:21:15.457784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.507 [2024-07-24 19:21:15.457806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.507 [2024-07-24 19:21:15.457821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.507 [2024-07-24 19:21:15.461862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.507 [2024-07-24 19:21:15.471173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.507 [2024-07-24 19:21:15.471664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.507 [2024-07-24 19:21:15.471705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.507 [2024-07-24 19:21:15.471723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.507 [2024-07-24 19:21:15.471994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.507 [2024-07-24 19:21:15.472262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.507 [2024-07-24 19:21:15.472284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.507 [2024-07-24 19:21:15.472301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.507 [2024-07-24 19:21:15.476371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.507 [2024-07-24 19:21:15.485681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.507 [2024-07-24 19:21:15.486158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.507 [2024-07-24 19:21:15.486189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.507 [2024-07-24 19:21:15.486207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.507 [2024-07-24 19:21:15.486471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.507 [2024-07-24 19:21:15.486749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.507 [2024-07-24 19:21:15.486777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.507 [2024-07-24 19:21:15.486793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.507 [2024-07-24 19:21:15.490834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.507 [2024-07-24 19:21:15.500130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.507 [2024-07-24 19:21:15.500605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.507 [2024-07-24 19:21:15.500636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.507 [2024-07-24 19:21:15.500653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.507 [2024-07-24 19:21:15.500917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.507 [2024-07-24 19:21:15.501183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.507 [2024-07-24 19:21:15.501206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.507 [2024-07-24 19:21:15.501221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.507 [2024-07-24 19:21:15.505263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.507 [2024-07-24 19:21:15.514641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.507 [2024-07-24 19:21:15.515168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.507 [2024-07-24 19:21:15.515221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.507 [2024-07-24 19:21:15.515239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.507 [2024-07-24 19:21:15.515516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.507 [2024-07-24 19:21:15.515784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.507 [2024-07-24 19:21:15.515814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.507 [2024-07-24 19:21:15.515839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.767 [2024-07-24 19:21:15.520075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.767 [2024-07-24 19:21:15.529035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.767 [2024-07-24 19:21:15.529579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.767 [2024-07-24 19:21:15.529621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.767 [2024-07-24 19:21:15.529645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.767 [2024-07-24 19:21:15.529915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.767 [2024-07-24 19:21:15.530183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.767 [2024-07-24 19:21:15.530205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.767 [2024-07-24 19:21:15.530220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.767 [2024-07-24 19:21:15.534297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.767 [2024-07-24 19:21:15.543456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.767 [2024-07-24 19:21:15.544025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.767 [2024-07-24 19:21:15.544066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.767 [2024-07-24 19:21:15.544085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.767 [2024-07-24 19:21:15.544356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.767 [2024-07-24 19:21:15.544635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.767 [2024-07-24 19:21:15.544658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.767 [2024-07-24 19:21:15.544673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.767 [2024-07-24 19:21:15.548724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.767 [2024-07-24 19:21:15.557853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.767 [2024-07-24 19:21:15.558438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.767 [2024-07-24 19:21:15.558491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.767 [2024-07-24 19:21:15.558513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.767 [2024-07-24 19:21:15.558783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.767 [2024-07-24 19:21:15.559058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.767 [2024-07-24 19:21:15.559081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.767 [2024-07-24 19:21:15.559096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.767 [2024-07-24 19:21:15.563173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.767 [2024-07-24 19:21:15.572358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.767 [2024-07-24 19:21:15.572914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.767 [2024-07-24 19:21:15.572965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.767 [2024-07-24 19:21:15.572982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.767 [2024-07-24 19:21:15.573246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.767 [2024-07-24 19:21:15.573526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.767 [2024-07-24 19:21:15.573549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.767 [2024-07-24 19:21:15.573564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.767 [2024-07-24 19:21:15.577641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.767 [2024-07-24 19:21:15.586976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.767 [2024-07-24 19:21:15.587460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.767 [2024-07-24 19:21:15.587523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.767 [2024-07-24 19:21:15.587541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.767 [2024-07-24 19:21:15.587817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.767 [2024-07-24 19:21:15.588083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.767 [2024-07-24 19:21:15.588105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.767 [2024-07-24 19:21:15.588121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.767 [2024-07-24 19:21:15.592176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.767 [2024-07-24 19:21:15.601564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.767 [2024-07-24 19:21:15.602044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.767 [2024-07-24 19:21:15.602092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.767 [2024-07-24 19:21:15.602114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.767 [2024-07-24 19:21:15.602385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.767 [2024-07-24 19:21:15.602672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.767 [2024-07-24 19:21:15.602695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.767 [2024-07-24 19:21:15.602711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.767 [2024-07-24 19:21:15.606785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.767 [2024-07-24 19:21:15.616163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.767 [2024-07-24 19:21:15.616764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.767 [2024-07-24 19:21:15.616806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.767 [2024-07-24 19:21:15.616825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.767 [2024-07-24 19:21:15.617095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.767 [2024-07-24 19:21:15.617364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.767 [2024-07-24 19:21:15.617386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.767 [2024-07-24 19:21:15.617402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.767 [2024-07-24 19:21:15.621513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.767 [2024-07-24 19:21:15.630716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.767 [2024-07-24 19:21:15.631240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.767 [2024-07-24 19:21:15.631282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.767 [2024-07-24 19:21:15.631300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.767 [2024-07-24 19:21:15.631585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.767 [2024-07-24 19:21:15.631853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.767 [2024-07-24 19:21:15.631876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.767 [2024-07-24 19:21:15.631906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.768 [2024-07-24 19:21:15.636019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.768 [2024-07-24 19:21:15.645130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.768 [2024-07-24 19:21:15.645709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.768 [2024-07-24 19:21:15.645750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.768 [2024-07-24 19:21:15.645768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.768 [2024-07-24 19:21:15.646039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.768 [2024-07-24 19:21:15.646313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.768 [2024-07-24 19:21:15.646335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.768 [2024-07-24 19:21:15.646351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.768 [2024-07-24 19:21:15.650420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.768 [2024-07-24 19:21:15.659531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.768 [2024-07-24 19:21:15.660125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.768 [2024-07-24 19:21:15.660167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.768 [2024-07-24 19:21:15.660186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.768 [2024-07-24 19:21:15.660457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.768 [2024-07-24 19:21:15.660738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.768 [2024-07-24 19:21:15.660762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.768 [2024-07-24 19:21:15.660777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.768 [2024-07-24 19:21:15.664880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.768 [2024-07-24 19:21:15.674096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.768 [2024-07-24 19:21:15.674644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.768 [2024-07-24 19:21:15.674686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.768 [2024-07-24 19:21:15.674705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.768 [2024-07-24 19:21:15.674975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.768 [2024-07-24 19:21:15.675243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.768 [2024-07-24 19:21:15.675265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.768 [2024-07-24 19:21:15.675280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.768 [2024-07-24 19:21:15.679382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.768 [2024-07-24 19:21:15.688511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.768 [2024-07-24 19:21:15.688976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.768 [2024-07-24 19:21:15.689024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.768 [2024-07-24 19:21:15.689042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.768 [2024-07-24 19:21:15.689305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.768 [2024-07-24 19:21:15.689592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.768 [2024-07-24 19:21:15.689615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.768 [2024-07-24 19:21:15.689630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.768 [2024-07-24 19:21:15.693693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.768 [2024-07-24 19:21:15.703015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.768 [2024-07-24 19:21:15.703514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.768 [2024-07-24 19:21:15.703573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.768 [2024-07-24 19:21:15.703590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.768 [2024-07-24 19:21:15.703854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.768 [2024-07-24 19:21:15.704120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.768 [2024-07-24 19:21:15.704142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.768 [2024-07-24 19:21:15.704157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.768 [2024-07-24 19:21:15.708216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.768 [2024-07-24 19:21:15.717560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.768 [2024-07-24 19:21:15.718034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.768 [2024-07-24 19:21:15.718084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.768 [2024-07-24 19:21:15.718101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.768 [2024-07-24 19:21:15.718365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.768 [2024-07-24 19:21:15.718642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.768 [2024-07-24 19:21:15.718664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.768 [2024-07-24 19:21:15.718679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.768 [2024-07-24 19:21:15.722753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.768 [2024-07-24 19:21:15.732152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.768 [2024-07-24 19:21:15.732734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.768 [2024-07-24 19:21:15.732776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.768 [2024-07-24 19:21:15.732794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.768 [2024-07-24 19:21:15.733066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.768 [2024-07-24 19:21:15.733341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.768 [2024-07-24 19:21:15.733364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.768 [2024-07-24 19:21:15.733379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.768 [2024-07-24 19:21:15.737487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.768 [2024-07-24 19:21:15.746626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.768 [2024-07-24 19:21:15.747079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.768 [2024-07-24 19:21:15.747111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.768 [2024-07-24 19:21:15.747128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.768 [2024-07-24 19:21:15.747399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.768 [2024-07-24 19:21:15.747690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.768 [2024-07-24 19:21:15.747715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.768 [2024-07-24 19:21:15.747730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.768 [2024-07-24 19:21:15.751842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.768 [2024-07-24 19:21:15.761025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.768 [2024-07-24 19:21:15.761530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.768 [2024-07-24 19:21:15.761570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.768 [2024-07-24 19:21:15.761589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.768 [2024-07-24 19:21:15.761859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.768 [2024-07-24 19:21:15.762127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.768 [2024-07-24 19:21:15.762150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.768 [2024-07-24 19:21:15.762165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.768 [2024-07-24 19:21:15.766250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.768 [2024-07-24 19:21:15.775370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.768 [2024-07-24 19:21:15.775897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.768 [2024-07-24 19:21:15.775947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:09.768 [2024-07-24 19:21:15.775977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:09.768 [2024-07-24 19:21:15.776252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:09.768 [2024-07-24 19:21:15.776548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.768 [2024-07-24 19:21:15.776574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.768 [2024-07-24 19:21:15.776590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:10.027 [2024-07-24 19:21:15.780827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:10.027 [2024-07-24 19:21:15.789811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:10.027 [2024-07-24 19:21:15.790375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.027 [2024-07-24 19:21:15.790431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:10.027 [2024-07-24 19:21:15.790451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:10.027 [2024-07-24 19:21:15.790733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:10.028 [2024-07-24 19:21:15.791008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:10.028 [2024-07-24 19:21:15.791030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:10.028 [2024-07-24 19:21:15.791045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:10.028 [2024-07-24 19:21:15.795110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:10.028 [2024-07-24 19:21:15.804243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:10.028 [2024-07-24 19:21:15.804838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.028 [2024-07-24 19:21:15.804880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:10.028 [2024-07-24 19:21:15.804899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:10.028 [2024-07-24 19:21:15.805169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:10.028 [2024-07-24 19:21:15.805438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:10.028 [2024-07-24 19:21:15.805466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:10.028 [2024-07-24 19:21:15.805501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:10.028 [2024-07-24 19:21:15.809580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:10.028 [2024-07-24 19:21:15.818709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:10.028 [2024-07-24 19:21:15.819262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.028 [2024-07-24 19:21:15.819303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:10.028 [2024-07-24 19:21:15.819322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:10.028 [2024-07-24 19:21:15.819607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:10.028 [2024-07-24 19:21:15.819876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:10.028 [2024-07-24 19:21:15.819899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:10.028 [2024-07-24 19:21:15.819914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:10.028 [2024-07-24 19:21:15.824000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:10.028 [2024-07-24 19:21:15.833143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:10.028 [2024-07-24 19:21:15.833703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.028 [2024-07-24 19:21:15.833750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:10.028 [2024-07-24 19:21:15.833769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:10.028 [2024-07-24 19:21:15.834040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:10.028 [2024-07-24 19:21:15.834308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:10.028 [2024-07-24 19:21:15.834330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:10.028 [2024-07-24 19:21:15.834346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:10.028 [2024-07-24 19:21:15.838423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:10.028 [2024-07-24 19:21:15.847532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:10.028 [2024-07-24 19:21:15.848078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.028 [2024-07-24 19:21:15.848119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:10.028 [2024-07-24 19:21:15.848138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:10.028 [2024-07-24 19:21:15.848409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:10.028 [2024-07-24 19:21:15.848689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:10.028 [2024-07-24 19:21:15.848712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:10.028 [2024-07-24 19:21:15.848728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:10.028 [2024-07-24 19:21:15.852812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:10.028 [2024-07-24 19:21:15.861977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:10.028 [2024-07-24 19:21:15.862534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.028 [2024-07-24 19:21:15.862597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:10.028 [2024-07-24 19:21:15.862616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:10.028 [2024-07-24 19:21:15.862897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:10.028 [2024-07-24 19:21:15.863171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:10.028 [2024-07-24 19:21:15.863194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:10.028 [2024-07-24 19:21:15.863210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:10.028 [2024-07-24 19:21:15.867304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:10.028 [2024-07-24 19:21:15.876504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.028 [2024-07-24 19:21:15.877001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.028 [2024-07-24 19:21:15.877050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.028 [2024-07-24 19:21:15.877068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.028 [2024-07-24 19:21:15.877332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.028 [2024-07-24 19:21:15.877624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.028 [2024-07-24 19:21:15.877647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.028 [2024-07-24 19:21:15.877662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.028 [2024-07-24 19:21:15.881759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.028 [2024-07-24 19:21:15.890907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.028 [2024-07-24 19:21:15.891401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.028 [2024-07-24 19:21:15.891448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.028 [2024-07-24 19:21:15.891466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.028 [2024-07-24 19:21:15.891740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.028 [2024-07-24 19:21:15.892007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.028 [2024-07-24 19:21:15.892029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.028 [2024-07-24 19:21:15.892044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.028 [2024-07-24 19:21:15.896114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.028 [2024-07-24 19:21:15.905461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.028 [2024-07-24 19:21:15.905942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.028 [2024-07-24 19:21:15.905971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.028 [2024-07-24 19:21:15.905988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.028 [2024-07-24 19:21:15.906252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.028 [2024-07-24 19:21:15.906531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.028 [2024-07-24 19:21:15.906553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.028 [2024-07-24 19:21:15.906569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.028 [2024-07-24 19:21:15.910619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.028 [2024-07-24 19:21:15.919959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.028 [2024-07-24 19:21:15.920432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.028 [2024-07-24 19:21:15.920461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.028 [2024-07-24 19:21:15.920487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.028 [2024-07-24 19:21:15.920755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.028 [2024-07-24 19:21:15.921021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.028 [2024-07-24 19:21:15.921043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.028 [2024-07-24 19:21:15.921058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.028 [2024-07-24 19:21:15.925121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.028 [2024-07-24 19:21:15.934531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.029 [2024-07-24 19:21:15.935028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.029 [2024-07-24 19:21:15.935069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.029 [2024-07-24 19:21:15.935088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.029 [2024-07-24 19:21:15.935358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.029 [2024-07-24 19:21:15.935653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.029 [2024-07-24 19:21:15.935677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.029 [2024-07-24 19:21:15.935692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.029 [2024-07-24 19:21:15.939755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.029 [2024-07-24 19:21:15.949132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.029 [2024-07-24 19:21:15.949704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.029 [2024-07-24 19:21:15.949746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.029 [2024-07-24 19:21:15.949764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.029 [2024-07-24 19:21:15.950035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.029 [2024-07-24 19:21:15.950303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.029 [2024-07-24 19:21:15.950325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.029 [2024-07-24 19:21:15.950341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.029 [2024-07-24 19:21:15.954424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.029 [2024-07-24 19:21:15.963603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.029 [2024-07-24 19:21:15.964179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.029 [2024-07-24 19:21:15.964220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.029 [2024-07-24 19:21:15.964239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.029 [2024-07-24 19:21:15.964523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.029 [2024-07-24 19:21:15.964792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.029 [2024-07-24 19:21:15.964814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.029 [2024-07-24 19:21:15.964830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.029 [2024-07-24 19:21:15.968905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.029 [2024-07-24 19:21:15.978000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.029 [2024-07-24 19:21:15.978589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.029 [2024-07-24 19:21:15.978630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.029 [2024-07-24 19:21:15.978656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.029 [2024-07-24 19:21:15.978928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.029 [2024-07-24 19:21:15.979196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.029 [2024-07-24 19:21:15.979218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.029 [2024-07-24 19:21:15.979233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.029 [2024-07-24 19:21:15.983316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.029 [2024-07-24 19:21:15.992458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.029 [2024-07-24 19:21:15.993015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.029 [2024-07-24 19:21:15.993056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.029 [2024-07-24 19:21:15.993075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.029 [2024-07-24 19:21:15.993345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.029 [2024-07-24 19:21:15.993633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.029 [2024-07-24 19:21:15.993656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.029 [2024-07-24 19:21:15.993672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.029 [2024-07-24 19:21:15.997765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.029 [2024-07-24 19:21:16.006912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.029 [2024-07-24 19:21:16.007505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.029 [2024-07-24 19:21:16.007546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.029 [2024-07-24 19:21:16.007566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.029 [2024-07-24 19:21:16.007836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.029 [2024-07-24 19:21:16.008104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.029 [2024-07-24 19:21:16.008126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.029 [2024-07-24 19:21:16.008142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.029 [2024-07-24 19:21:16.012230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.029 [2024-07-24 19:21:16.021345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.029 [2024-07-24 19:21:16.021925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.029 [2024-07-24 19:21:16.021966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.029 [2024-07-24 19:21:16.021985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.029 [2024-07-24 19:21:16.022254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.029 [2024-07-24 19:21:16.022542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.029 [2024-07-24 19:21:16.022572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.029 [2024-07-24 19:21:16.022588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.029 [2024-07-24 19:21:16.026683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.029 [2024-07-24 19:21:16.035785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.029 [2024-07-24 19:21:16.036365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.029 [2024-07-24 19:21:16.036406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.029 [2024-07-24 19:21:16.036425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.029 [2024-07-24 19:21:16.036738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.029 [2024-07-24 19:21:16.037031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.029 [2024-07-24 19:21:16.037057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.029 [2024-07-24 19:21:16.037072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.288 [2024-07-24 19:21:16.041304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.288 [2024-07-24 19:21:16.050327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.289 [2024-07-24 19:21:16.050871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.289 [2024-07-24 19:21:16.050913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.289 [2024-07-24 19:21:16.050932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.289 [2024-07-24 19:21:16.051203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.289 [2024-07-24 19:21:16.051470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.289 [2024-07-24 19:21:16.051506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.289 [2024-07-24 19:21:16.051522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.289 [2024-07-24 19:21:16.055592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.289 [2024-07-24 19:21:16.064763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.289 [2024-07-24 19:21:16.065257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.289 [2024-07-24 19:21:16.065288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.289 [2024-07-24 19:21:16.065305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.289 [2024-07-24 19:21:16.065581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.289 [2024-07-24 19:21:16.065849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.289 [2024-07-24 19:21:16.065871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.289 [2024-07-24 19:21:16.065886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.289 [2024-07-24 19:21:16.069971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.289 [2024-07-24 19:21:16.079336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.289 [2024-07-24 19:21:16.079855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.289 [2024-07-24 19:21:16.079885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.289 [2024-07-24 19:21:16.079902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.289 [2024-07-24 19:21:16.080165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.289 [2024-07-24 19:21:16.080432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.289 [2024-07-24 19:21:16.080454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.289 [2024-07-24 19:21:16.080469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.289 [2024-07-24 19:21:16.084562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.289 [2024-07-24 19:21:16.093935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.289 [2024-07-24 19:21:16.094394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.289 [2024-07-24 19:21:16.094424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.289 [2024-07-24 19:21:16.094442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.289 [2024-07-24 19:21:16.094716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.289 [2024-07-24 19:21:16.094984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.289 [2024-07-24 19:21:16.095005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.289 [2024-07-24 19:21:16.095021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.289 [2024-07-24 19:21:16.099092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.289 [2024-07-24 19:21:16.108437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.289 [2024-07-24 19:21:16.108944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.289 [2024-07-24 19:21:16.108999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.289 [2024-07-24 19:21:16.109018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.289 [2024-07-24 19:21:16.109289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.289 [2024-07-24 19:21:16.109572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.289 [2024-07-24 19:21:16.109595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.289 [2024-07-24 19:21:16.109610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.289 [2024-07-24 19:21:16.113720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.289 [2024-07-24 19:21:16.122904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.289 [2024-07-24 19:21:16.123471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.289 [2024-07-24 19:21:16.123523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.289 [2024-07-24 19:21:16.123542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.289 [2024-07-24 19:21:16.123820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.289 [2024-07-24 19:21:16.124088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.289 [2024-07-24 19:21:16.124111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.289 [2024-07-24 19:21:16.124126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.289 [2024-07-24 19:21:16.128219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.289 [2024-07-24 19:21:16.137434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.289 [2024-07-24 19:21:16.137977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.289 [2024-07-24 19:21:16.138018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.289 [2024-07-24 19:21:16.138037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.289 [2024-07-24 19:21:16.138307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.289 [2024-07-24 19:21:16.138589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.289 [2024-07-24 19:21:16.138613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.289 [2024-07-24 19:21:16.138628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.289 [2024-07-24 19:21:16.142740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.289 [2024-07-24 19:21:16.151881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.289 [2024-07-24 19:21:16.152460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.289 [2024-07-24 19:21:16.152510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.289 [2024-07-24 19:21:16.152530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.289 [2024-07-24 19:21:16.152800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.289 [2024-07-24 19:21:16.153068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.289 [2024-07-24 19:21:16.153091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.289 [2024-07-24 19:21:16.153106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.289 [2024-07-24 19:21:16.157174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.289 [2024-07-24 19:21:16.166259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.289 [2024-07-24 19:21:16.166715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.289 [2024-07-24 19:21:16.166755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.289 [2024-07-24 19:21:16.166774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.289 [2024-07-24 19:21:16.167045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.289 [2024-07-24 19:21:16.167313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.289 [2024-07-24 19:21:16.167335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.289 [2024-07-24 19:21:16.167358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.289 [2024-07-24 19:21:16.171468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.289 [2024-07-24 19:21:16.180845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.289 [2024-07-24 19:21:16.181325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.290 [2024-07-24 19:21:16.181366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.290 [2024-07-24 19:21:16.181385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.290 [2024-07-24 19:21:16.181669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.290 [2024-07-24 19:21:16.181938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.290 [2024-07-24 19:21:16.181960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.290 [2024-07-24 19:21:16.181976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.290 [2024-07-24 19:21:16.186067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.290 [2024-07-24 19:21:16.195409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.290 [2024-07-24 19:21:16.196031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.290 [2024-07-24 19:21:16.196073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.290 [2024-07-24 19:21:16.196091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.290 [2024-07-24 19:21:16.196362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.290 [2024-07-24 19:21:16.196653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.290 [2024-07-24 19:21:16.196677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.290 [2024-07-24 19:21:16.196692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.290 [2024-07-24 19:21:16.200779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.290 [2024-07-24 19:21:16.209944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.290 [2024-07-24 19:21:16.210442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.290 [2024-07-24 19:21:16.210531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.290 [2024-07-24 19:21:16.210551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.290 [2024-07-24 19:21:16.210815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.290 [2024-07-24 19:21:16.211082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.290 [2024-07-24 19:21:16.211104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.290 [2024-07-24 19:21:16.211119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.290 [2024-07-24 19:21:16.215207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.290 [2024-07-24 19:21:16.224296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.290 [2024-07-24 19:21:16.224825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.290 [2024-07-24 19:21:16.224867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.290 [2024-07-24 19:21:16.224886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.290 [2024-07-24 19:21:16.225157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.290 [2024-07-24 19:21:16.225425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.290 [2024-07-24 19:21:16.225447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.290 [2024-07-24 19:21:16.225463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.290 [2024-07-24 19:21:16.229523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.290 [2024-07-24 19:21:16.238844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.290 [2024-07-24 19:21:16.239408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.290 [2024-07-24 19:21:16.239450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.290 [2024-07-24 19:21:16.239468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.290 [2024-07-24 19:21:16.239750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.290 [2024-07-24 19:21:16.240019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.290 [2024-07-24 19:21:16.240041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.290 [2024-07-24 19:21:16.240056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.290 [2024-07-24 19:21:16.244138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.290 [2024-07-24 19:21:16.253276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.290 [2024-07-24 19:21:16.253829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.290 [2024-07-24 19:21:16.253872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.290 [2024-07-24 19:21:16.253890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.290 [2024-07-24 19:21:16.254160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.290 [2024-07-24 19:21:16.254428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.290 [2024-07-24 19:21:16.254451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.290 [2024-07-24 19:21:16.254466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.290 [2024-07-24 19:21:16.258601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.290 [2024-07-24 19:21:16.267768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.290 [2024-07-24 19:21:16.268280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.290 [2024-07-24 19:21:16.268320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.290 [2024-07-24 19:21:16.268339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.290 [2024-07-24 19:21:16.268623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.290 [2024-07-24 19:21:16.268900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.290 [2024-07-24 19:21:16.268923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.290 [2024-07-24 19:21:16.268939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.290 [2024-07-24 19:21:16.273023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.290 [2024-07-24 19:21:16.282180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.290 [2024-07-24 19:21:16.282753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.290 [2024-07-24 19:21:16.282833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.290 [2024-07-24 19:21:16.282852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.290 [2024-07-24 19:21:16.283123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.290 [2024-07-24 19:21:16.283391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.290 [2024-07-24 19:21:16.283413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.290 [2024-07-24 19:21:16.283428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.290 [2024-07-24 19:21:16.287507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.290 [2024-07-24 19:21:16.296634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.290 [2024-07-24 19:21:16.297232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.290 [2024-07-24 19:21:16.297274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.290 [2024-07-24 19:21:16.297292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.290 [2024-07-24 19:21:16.297593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.290 [2024-07-24 19:21:16.297894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.290 [2024-07-24 19:21:16.297920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.290 [2024-07-24 19:21:16.297935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.550 [2024-07-24 19:21:16.302169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
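Note: errno 111 on Linux is ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 — expected at this point, since the test killed the NVMe-oF target to exercise the host's reset path, and bdev_nvme keeps retrying roughly every 14.5 ms. A minimal shell sketch that provokes the same failure (assumes bash's /dev/tcp redirection support; the address and port are taken from the log above):

    # Attempt a TCP connect to an address with no listener behind it; the
    # kernel answers with ECONNREFUSED (errno 111), the same error that
    # posix_sock_create reports in the messages above.
    if ! timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo 'connect() refused: no listener on 10.0.0.2:4420'
    fi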
00:24:10.550 [2024-07-24 19:21:16.311213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:10.550 [2024-07-24 19:21:16.311695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.550 [2024-07-24 19:21:16.311742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420
00:24:10.550 [2024-07-24 19:21:16.311761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set
00:24:10.550 [2024-07-24 19:21:16.312025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor
00:24:10.550 [2024-07-24 19:21:16.312292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:10.550 [2024-07-24 19:21:16.312314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:10.550 [2024-07-24 19:21:16.312329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:10.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2641300 Killed "${NVMF_APP[@]}" "$@"
00:24:10.550 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:24:10.550 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:24:10.550 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:10.550 [2024-07-24 19:21:16.316398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:10.550 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:10.550 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:10.550 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2642420
00:24:10.550 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:10.550 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2642420
00:24:10.550 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2642420 ']'
00:24:10.550 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:10.550 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:10.550 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:10.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
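The xtrace lines above show the recovery path: the old target ("${NVMF_APP[@]}", PID 2641300) was killed, and tgt_init calls nvmfappstart -m 0xE, which launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then calls waitforlisten to block until the new process (PID 2642420) is serving its RPC socket. A simplified sketch of that start-and-wait sequence (the paths, flags and max_retries=100 come from the trace; the polling loop body is an assumption about what waitforlisten does):

    # Start the target in its network namespace and remember its PID.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll for the UNIX-domain RPC socket the app creates once it is ready.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $rpc_addr ]] && break              # socket exists: target is up
        kill -0 "$nvmfpid" 2>/dev/null || exit 1 # target died while starting
        sleep 0.5
    done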
00:24:10.550 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:10.551 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... reconnect attempts at 19:21:16.325, 19:21:16.340 and 19:21:16.354 fail with the same errno = 111 cycle; a fourth attempt starts at 19:21:16.369 and fails the same way while the new target begins initializing ...]
00:24:10.551 [2024-07-24 19:21:16.373512] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:24:10.551 [2024-07-24 19:21:16.373606] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:10.551 [2024-07-24 19:21:16.374147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
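The -c 0xE handed to the DPDK EAL (and the -m 0xE passed to nvmfappstart) is a hexadecimal core mask: 0xE is binary 1110, so cores 1, 2 and 3 are selected and core 0 is left free, which is why spdk_app_start later reports "Total cores available: 3". A quick way to decode such a mask in the shell:

    # Decode a hex CPU core mask into the cores it selects.
    mask=0xE
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done
    # Prints "core 1 selected", "core 2 selected", "core 3 selected".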
[... reconnect attempts at 19:21:16.383 and 19:21:16.397 fail with the same errno = 111 cycle ...]
00:24:10.551 EAL: No free 2048 kB hugepages reported on node 1
[... reconnect attempts at 19:21:16.412 and 19:21:16.426 fail identically ...]
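The EAL notice above means no 2 MB hugepages were pre-reserved on NUMA node 1; SPDK/DPDK carve their memory pools out of hugepages, so a target host normally reserves them before starting. One common way to reserve and check them (a generic sketch, not necessarily how this CI host is provisioned):

    # Reserve 1024 x 2 MB hugepages on NUMA node 1 (requires root), then verify.
    echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    grep -i hugepages /proc/meminfo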
00:24:10.551 [2024-07-24 19:21:16.441323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.551 [2024-07-24 19:21:16.441627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:10.551 [2024-07-24 19:21:16.441761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.551 [2024-07-24 19:21:16.441792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.552 [2024-07-24 19:21:16.441810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.552 [2024-07-24 19:21:16.442074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.552 [2024-07-24 19:21:16.442342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.552 [2024-07-24 19:21:16.442364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.552 [2024-07-24 19:21:16.442379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.552 [2024-07-24 19:21:16.446472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.552 [2024-07-24 19:21:16.455945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.552 [2024-07-24 19:21:16.456524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.552 [2024-07-24 19:21:16.456566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.552 [2024-07-24 19:21:16.456587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.552 [2024-07-24 19:21:16.456860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.552 [2024-07-24 19:21:16.457133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.552 [2024-07-24 19:21:16.457156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.552 [2024-07-24 19:21:16.457174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.552 [2024-07-24 19:21:16.461216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.552 [2024-07-24 19:21:16.470547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.552 [2024-07-24 19:21:16.471076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.552 [2024-07-24 19:21:16.471117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.552 [2024-07-24 19:21:16.471137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.552 [2024-07-24 19:21:16.471409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.552 [2024-07-24 19:21:16.471689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.552 [2024-07-24 19:21:16.471713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.552 [2024-07-24 19:21:16.471731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.552 [2024-07-24 19:21:16.475772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.552 [2024-07-24 19:21:16.485076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.552 [2024-07-24 19:21:16.485583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.552 [2024-07-24 19:21:16.485625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.552 [2024-07-24 19:21:16.485646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.552 [2024-07-24 19:21:16.485920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.552 [2024-07-24 19:21:16.486202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.552 [2024-07-24 19:21:16.486225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.552 [2024-07-24 19:21:16.486242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.552 [2024-07-24 19:21:16.490312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
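How long the host keeps cycling through these failed resets, and how often, is decided when the controller is attached on the bdev_nvme side, not by the target. A hedged sketch of attaching with an explicit retry policy (the long option names are assumed here; -1 means retry indefinitely):

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec -1 --reconnect-delay-sec 1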
00:24:10.552 [2024-07-24 19:21:16.499629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.552 [2024-07-24 19:21:16.500204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.552 [2024-07-24 19:21:16.500259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.552 [2024-07-24 19:21:16.500281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.552 [2024-07-24 19:21:16.500574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.552 [2024-07-24 19:21:16.500850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.552 [2024-07-24 19:21:16.500874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.552 [2024-07-24 19:21:16.500893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.552 [2024-07-24 19:21:16.505014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.552 [2024-07-24 19:21:16.514208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.552 [2024-07-24 19:21:16.514853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.552 [2024-07-24 19:21:16.514911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.552 [2024-07-24 19:21:16.514934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.552 [2024-07-24 19:21:16.515216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.552 [2024-07-24 19:21:16.515499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.552 [2024-07-24 19:21:16.515523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.552 [2024-07-24 19:21:16.515543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.552 [2024-07-24 19:21:16.519587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
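The retry cadence is visible in the timestamps: attempts land roughly every 15 ms (.470, .485, .499, .514, ...). Pulling that out of a saved copy of this output, as a sketch (the log filename is illustrative):

  grep -c 'Resetting controller failed' bdevperf.log        # count of failed attempts
  grep 'resetting controller' bdevperf.log | head           # timestamps show the ~15 ms cycle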
00:24:10.552 [2024-07-24 19:21:16.528660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.552 [2024-07-24 19:21:16.529200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.552 [2024-07-24 19:21:16.529252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.552 [2024-07-24 19:21:16.529272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.552 [2024-07-24 19:21:16.529561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.552 [2024-07-24 19:21:16.529833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.552 [2024-07-24 19:21:16.529856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.552 [2024-07-24 19:21:16.529874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.552 [2024-07-24 19:21:16.533924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.552 [2024-07-24 19:21:16.543016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.552 [2024-07-24 19:21:16.543511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.552 [2024-07-24 19:21:16.543550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.552 [2024-07-24 19:21:16.543570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.552 [2024-07-24 19:21:16.543840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.552 [2024-07-24 19:21:16.544110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.552 [2024-07-24 19:21:16.544133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.552 [2024-07-24 19:21:16.544150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.552 [2024-07-24 19:21:16.548196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.552 [2024-07-24 19:21:16.557522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.552 [2024-07-24 19:21:16.557954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.552 [2024-07-24 19:21:16.557991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.552 [2024-07-24 19:21:16.558006] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.552 [2024-07-24 19:21:16.558020] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.552 [2024-07-24 19:21:16.558032] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
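The app_setup_trace notices above spell out how to inspect the target's tracepoints; both forms below are taken directly from those notices (run from an SPDK build tree while the target is up, or against the copied file afterwards):

  build/bin/spdk_trace -s nvmf -i 0        # snapshot of events at runtime
  cp /dev/shm/nvmf_trace.0 .               # keep the trace for offline analysis/debug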
00:24:10.552 [2024-07-24 19:21:16.558057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.552 [2024-07-24 19:21:16.558096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.552 [2024-07-24 19:21:16.558117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.552 [2024-07-24 19:21:16.558115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.552 [2024-07-24 19:21:16.558388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.552 [2024-07-24 19:21:16.558685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.552 [2024-07-24 19:21:16.558718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.552 [2024-07-24 19:21:16.558746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.552 [2024-07-24 19:21:16.559501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.553 [2024-07-24 19:21:16.559544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.553 [2024-07-24 19:21:16.562978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.812 [2024-07-24 19:21:16.572205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.812 [2024-07-24 19:21:16.572809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.812 [2024-07-24 19:21:16.572854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.812 [2024-07-24 19:21:16.572876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.812 [2024-07-24 19:21:16.573152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.812 [2024-07-24 19:21:16.573441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.812 [2024-07-24 19:21:16.573464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.812 [2024-07-24 19:21:16.573493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.812 [2024-07-24 19:21:16.577608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
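The three 'Reactor started on core N' notices are the restarted target's event framework pinning one polling thread to each core in its mask; they interleave with the host's retries because both processes write to the same console. A hedged way to see the reactor threads from the shell (reusing the hypothetical $tgt_pid from the earlier sketch):

  ps -T -p "$tgt_pid"   # one reactor_N thread per core in the -m mask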
00:24:10.812 [2024-07-24 19:21:16.586847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.812 [2024-07-24 19:21:16.587452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.812 [2024-07-24 19:21:16.587505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.812 [2024-07-24 19:21:16.587529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.812 [2024-07-24 19:21:16.587806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.812 [2024-07-24 19:21:16.588082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.812 [2024-07-24 19:21:16.588104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.812 [2024-07-24 19:21:16.588124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.812 [2024-07-24 19:21:16.592224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.812 [2024-07-24 19:21:16.601473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.812 [2024-07-24 19:21:16.602048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.812 [2024-07-24 19:21:16.602092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.812 [2024-07-24 19:21:16.602113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.812 [2024-07-24 19:21:16.602391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.812 [2024-07-24 19:21:16.602681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.812 [2024-07-24 19:21:16.602705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.812 [2024-07-24 19:21:16.602724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.812 [2024-07-24 19:21:16.606779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
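'Failed to flush tqpair ... (9): Bad file descriptor' is the follow-on of the refused connect: errno 9 is EBADF, reported when the already-closed socket is flushed. While the loop runs there is no established session on the initiator side, which can be confirmed with a hedged one-liner:

  ss -tn state established '( dport = :4420 )'   # prints nothing while the target port is down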
00:24:10.812 [2024-07-24 19:21:16.616158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.812 [2024-07-24 19:21:16.616798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.812 [2024-07-24 19:21:16.616856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.812 [2024-07-24 19:21:16.616879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.812 [2024-07-24 19:21:16.617168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.812 [2024-07-24 19:21:16.617451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.812 [2024-07-24 19:21:16.617488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.813 [2024-07-24 19:21:16.617511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.813 [2024-07-24 19:21:16.621692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.813 [2024-07-24 19:21:16.630813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.813 [2024-07-24 19:21:16.631317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.813 [2024-07-24 19:21:16.631359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.813 [2024-07-24 19:21:16.631381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.813 [2024-07-24 19:21:16.631663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.813 [2024-07-24 19:21:16.631936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.813 [2024-07-24 19:21:16.631959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.813 [2024-07-24 19:21:16.631977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.813 [2024-07-24 19:21:16.636016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.813 [2024-07-24 19:21:16.645330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.813 [2024-07-24 19:21:16.645767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.813 [2024-07-24 19:21:16.645798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.813 [2024-07-24 19:21:16.645816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.813 [2024-07-24 19:21:16.646081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.813 [2024-07-24 19:21:16.646348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.813 [2024-07-24 19:21:16.646371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.813 [2024-07-24 19:21:16.646386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.813 [2024-07-24 19:21:16.650428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.813 [2024-07-24 19:21:16.659751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.813 [2024-07-24 19:21:16.660171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.813 [2024-07-24 19:21:16.660203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.813 [2024-07-24 19:21:16.660221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.813 [2024-07-24 19:21:16.660498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.813 [2024-07-24 19:21:16.660767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.813 [2024-07-24 19:21:16.660795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.813 [2024-07-24 19:21:16.660810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.813 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:10.813 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:24:10.813 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.813 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.813 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.813 [2024-07-24 19:21:16.664868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
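The '(( i == 0 ))' / 'return 0' trace interleaved above is the tail of the harness's wait-for-target loop: it polls until the target answers RPCs and only errors out if the retry counter reaches zero. A hedged equivalent of that loop (rpc_get_methods is a cheap RPC that any live SPDK app answers):

  for ((i = 30; i > 0; i--)); do
      scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done
  (( i == 0 )) && { echo 'target never came up' >&2; exit 1; }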
00:24:10.813 [2024-07-24 19:21:16.674173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.813 [2024-07-24 19:21:16.674591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.813 [2024-07-24 19:21:16.674621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.813 [2024-07-24 19:21:16.674639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.813 [2024-07-24 19:21:16.674904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.813 [2024-07-24 19:21:16.675171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.813 [2024-07-24 19:21:16.675194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.813 [2024-07-24 19:21:16.675209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.813 [2024-07-24 19:21:16.679247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.813 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.813 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:10.813 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.813 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.813 [2024-07-24 19:21:16.687398] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.813 [2024-07-24 19:21:16.688554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.813 [2024-07-24 19:21:16.688946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.813 [2024-07-24 19:21:16.688975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.813 [2024-07-24 19:21:16.688992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.813 [2024-07-24 19:21:16.689255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.813 [2024-07-24 19:21:16.689544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.813 [2024-07-24 19:21:16.689567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.813 [2024-07-24 19:21:16.689583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.813 [2024-07-24 19:21:16.693626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
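With the target answering RPCs again, the script recreates the TCP transport; '*** TCP Transport Init ***' is the target acknowledging it. The same call issued by hand from an SPDK checkout, with arguments copied from the rpc_cmd trace above (-u sets the I/O unit size to 8192 bytes; -o is assumed here to toggle the optional C2H-success optimization):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192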
00:24:10.813 [2024-07-24 19:21:16.702919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.813 [2024-07-24 19:21:16.703346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.813 [2024-07-24 19:21:16.703377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.813 [2024-07-24 19:21:16.703394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.813 [2024-07-24 19:21:16.703667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.813 [2024-07-24 19:21:16.703934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.813 [2024-07-24 19:21:16.703963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.813 [2024-07-24 19:21:16.703984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.813 [2024-07-24 19:21:16.708048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.813 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.813 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:10.813 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.813 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.813 [2024-07-24 19:21:16.717377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.813 [2024-07-24 19:21:16.717915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.813 [2024-07-24 19:21:16.717954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.813 [2024-07-24 19:21:16.717974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.813 [2024-07-24 19:21:16.718249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.813 [2024-07-24 19:21:16.718531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.813 [2024-07-24 19:21:16.718554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.813 [2024-07-24 19:21:16.718572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.813 [2024-07-24 19:21:16.722658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:10.813 Malloc0 00:24:10.813 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.813 [2024-07-24 19:21:16.731834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.814 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:10.814 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.814 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.814 [2024-07-24 19:21:16.732412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.814 [2024-07-24 19:21:16.732448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.814 [2024-07-24 19:21:16.732469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.814 [2024-07-24 19:21:16.732749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.814 [2024-07-24 19:21:16.733021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.814 [2024-07-24 19:21:16.733044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.814 [2024-07-24 19:21:16.733062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.814 [2024-07-24 19:21:16.737102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.814 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.814 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:10.814 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.814 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.814 [2024-07-24 19:21:16.746198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.814 [2024-07-24 19:21:16.746592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.814 [2024-07-24 19:21:16.746622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a8d0 with addr=10.0.0.2, port=4420 00:24:10.814 [2024-07-24 19:21:16.746639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a8d0 is same with the state(5) to be set 00:24:10.814 [2024-07-24 19:21:16.746910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a8d0 (9): Bad file descriptor 00:24:10.814 [2024-07-24 19:21:16.747177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.814 [2024-07-24 19:21:16.747200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.814 [2024-07-24 19:21:16.747215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
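The bare 'Malloc0' line is rpc_cmd echoing the name returned by bdev_malloc_create as the test rebuilds its namespace and subsystem. Collected from the rpc_cmd traces around this point, the whole provisioning sequence is equivalent to this sketch (run from an SPDK checkout; the listener RPC appears just below):

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420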
00:24:10.814 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.814 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:10.814 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.814 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.814 [2024-07-24 19:21:16.751262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.814 [2024-07-24 19:21:16.751259] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.814 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.814 19:21:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2641909 00:24:10.814 [2024-07-24 19:21:16.760570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.814 [2024-07-24 19:21:16.803192] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:20.784 00:24:20.784 Latency(us) 00:24:20.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.784 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:20.784 Verification LBA range: start 0x0 length 0x4000 00:24:20.784 Nvme1n1 : 15.01 5800.11 22.66 7419.59 0.00 9652.29 652.33 20097.71 00:24:20.784 =================================================================================================================== 00:24:20.784 Total : 5800.11 22.66 7419.59 0.00 9652.29 652.33 20097.71 00:24:20.784 19:21:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:20.784 19:21:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.784 19:21:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.784 19:21:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:20.784 19:21:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.784 19:21:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:20.784 19:21:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:20.784 19:21:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:20.784 19:21:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:20.784 19:21:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:20.784 19:21:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:20.784 19:21:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:20.784 19:21:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:20.784 rmmod nvme_tcp 00:24:20.784 rmmod nvme_fabrics 00:24:20.784 rmmod nvme_keyring 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@489 -- # '[' -n 2642420 ']' 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2642420 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2642420 ']' 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2642420 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2642420 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2642420' 00:24:20.784 killing process with pid 2642420 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2642420 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2642420 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.784 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.692 19:21:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:22.692 00:24:22.692 real 0m21.885s 00:24:22.692 user 0m58.533s 00:24:22.692 sys 0m4.181s 00:24:22.692 19:21:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:22.692 19:21:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:22.692 ************************************ 00:24:22.692 END TEST nvmf_bdevperf 00:24:22.692 ************************************ 00:24:22.692 19:21:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:22.692 19:21:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:22.692 19:21:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:22.692 19:21:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.692 ************************************ 00:24:22.692 START TEST nvmf_target_disconnect 00:24:22.692 ************************************ 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:22.693 * Looking for test storage... 00:24:22.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:22.693 19:21:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:22.693 19:21:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:24.599 19:21:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:24.599 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:24.599 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:24.599 19:21:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:24.599 Found net devices under 0000:08:00.0: cvl_0_0 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:24.599 Found net devices under 0000:08:00.1: cvl_0_1 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:24.599 19:21:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.599 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:24.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:24:24.600 00:24:24.600 --- 10.0.0.2 ping statistics --- 00:24:24.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.600 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:24.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:24:24.600 00:24:24.600 --- 10.0.0.1 ping statistics --- 00:24:24.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.600 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:24.600 ************************************ 00:24:24.600 START TEST nvmf_target_disconnect_tc1 00:24:24.600 ************************************ 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:24.600 19:21:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:24.600 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.600 [2024-07-24 19:21:30.380594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.600 [2024-07-24 19:21:30.380691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a40c0 with addr=10.0.0.2, port=4420 00:24:24.600 [2024-07-24 19:21:30.380731] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:24.600 [2024-07-24 19:21:30.380759] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:24.600 [2024-07-24 19:21:30.380774] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:24.600 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:24.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:24.600 Initializing NVMe Controllers 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:24.600 00:24:24.600 real 0m0.094s 00:24:24.600 user 0m0.040s 00:24:24.600 sys 0m0.053s 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:24.600 ************************************ 00:24:24.600 END TEST nvmf_target_disconnect_tc1 00:24:24.600 ************************************ 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:24.600 19:21:30 
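tc1, which just finished above, is a negative test: nothing is listening on 10.0.0.2:4420 yet, so the reconnect example's spdk_nvme_probe() fails with connect() errno 111 (ECONNREFUSED), the harness wrapper converts the non-zero exit into es=1, and the (( !es == 0 )) check passes. A hedged sketch of that inversion pattern; the real NOT()/valid_exec_arg helpers in autotest_common.sh do more bookkeeping than this:

    # Run a command that is *expected* to fail and invert its status.
    NOT() {
        if "$@"; then
            return 1        # unexpectedly succeeded -> test failure
        fi
        return 0            # failed as expected (here: ECONNREFUSED)
    }
    NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'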
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:24.600 ************************************ 00:24:24.600 START TEST nvmf_target_disconnect_tc2 00:24:24.600 ************************************ 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2644842 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2644842 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2644842 ']' 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:24.600 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.600 [2024-07-24 19:21:30.498760] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:24:24.600 [2024-07-24 19:21:30.498859] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.600 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.600 [2024-07-24 19:21:30.565021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:24.859 [2024-07-24 19:21:30.683233] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
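disconnect_init above starts the target for tc2 inside the namespace, so 10.0.0.2:4420 is reachable only through cvl_0_0, then blocks until the app answers on its RPC socket. Condensed from the trace; note that -m 0xF0 selects cores 4-7, matching the four reactors reported just below (waitforlisten is the harness helper that polls /var/tmp/spdk.sock):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # returns once the target answers on its RPC socket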
00:24:24.859 [2024-07-24 19:21:30.683294] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.860 [2024-07-24 19:21:30.683310] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.860 [2024-07-24 19:21:30.683323] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.860 [2024-07-24 19:21:30.683335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.860 [2024-07-24 19:21:30.683395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:24.860 [2024-07-24 19:21:30.683449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:24.860 [2024-07-24 19:21:30.683515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:24.860 [2024-07-24 19:21:30.683523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.860 Malloc0 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.860 [2024-07-24 19:21:30.847206] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.860 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:25.120 [2024-07-24 19:21:30.875469] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.120 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.120 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:25.120 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.120 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:25.120 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.120 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2644955 00:24:25.120 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:25.120 19:21:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:25.120 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.039 19:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2644842 00:24:27.039 19:21:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting 
I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 [2024-07-24 19:21:32.901663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 
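Everything tc2 needs was assembled over RPC in the traces above, before the failure was injected. Condensed (rpc_cmd is the harness wrapper that drives scripts/rpc.py against the target's /var/tmp/spdk.sock):

    rpc_cmd bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420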
00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 [2024-07-24 19:21:32.902058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 
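The bursts of "completed with error (sct=0, sc=8)" entries around here are the injected failure taking effect: target_disconnect.sh hard-kills the target while the reconnect job is driving queue depth 32 (-q 32) random I/O, so every outstanding command completes with generic status 0x08 (NVMe "command aborted due to SQ deletion") and each queue pair then reports CQ transport error -6. The choreography, condensed from the traces above:

    # Start the I/O job, let it connect, then SIGKILL the target mid-I/O.
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"      # target dies with I/O outstanding
    sleep 2                 # host-side retries (errno 111) accumulate below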
Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Write completed with error (sct=0, sc=8) 00:24:27.039 starting I/O failed 00:24:27.039 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Write completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 [2024-07-24 19:21:32.902400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.040 [2024-07-24 19:21:32.902580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.902618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.902755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.902801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.902898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.902927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.903075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.903103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.903247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.903289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.903456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.903488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 
00:24:27.040 [2024-07-24 19:21:32.903676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.903731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.903892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.903935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.904140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.904188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.904308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.904334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.904451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.904499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.904739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.904788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.904994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.905044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.905238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.905291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.905499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.905551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.905712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.905753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 
00:24:27.040 [2024-07-24 19:21:32.905984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.906010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.906156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.906186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.906304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.906344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.906499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.906579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.906679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.906706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.906905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.906931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.907032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.907057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.907177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.907238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.907357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.907384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 [2024-07-24 19:21:32.907529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.907570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 
00:24:27.040 [2024-07-24 19:21:32.907732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.040 [2024-07-24 19:21:32.907785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.040 qpair failed and we were unable to recover it. 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Write completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Write completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Write completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Write completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Write completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Write completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Write completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Write completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Write completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.040 starting I/O failed 00:24:27.040 Read completed with error (sct=0, sc=8) 00:24:27.041 starting I/O failed 00:24:27.041 Read completed with error (sct=0, sc=8) 00:24:27.041 starting I/O failed 00:24:27.041 Read completed with error (sct=0, sc=8) 00:24:27.041 starting I/O failed 00:24:27.041 Read completed with error (sct=0, sc=8) 00:24:27.041 starting I/O failed 00:24:27.041 Write completed with error (sct=0, sc=8) 00:24:27.041 starting I/O failed 00:24:27.041 Read completed with error (sct=0, sc=8) 00:24:27.041 starting I/O failed 00:24:27.041 Read completed with error (sct=0, sc=8) 00:24:27.041 starting I/O failed 00:24:27.041 Read completed with error (sct=0, sc=8) 00:24:27.041 starting I/O failed 00:24:27.041 Write completed with error (sct=0, sc=8) 00:24:27.041 starting I/O failed 00:24:27.041 Read completed with error (sct=0, sc=8) 00:24:27.041 starting I/O failed 00:24:27.041 Read completed with error (sct=0, sc=8) 00:24:27.041 starting I/O failed 00:24:27.041 Write completed with error (sct=0, sc=8) 00:24:27.041 starting I/O failed 00:24:27.041 [2024-07-24 19:21:32.908145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:27.041 [2024-07-24 19:21:32.908319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.908361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, 
port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.908565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.908617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.908823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.908874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.909060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.909108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.909286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.909311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.909505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.909556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.909697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.909723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.909916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.909941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.910040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.910066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.910210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.910266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.910401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.910433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 
00:24:27.041 [2024-07-24 19:21:32.910551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.910579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.910690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.910722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.910900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.910955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.911140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.911166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.911356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.911407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.911621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.911677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.911903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.911932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.912075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.912129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.912230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.912256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.912359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.912386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 
00:24:27.041 [2024-07-24 19:21:32.912511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.912555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.912674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.912700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.912911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.912937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.913076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.913103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.913249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.913298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.913396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.913423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.913596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.913622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.913787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.913836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.913965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.913992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 00:24:27.041 [2024-07-24 19:21:32.914148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.041 [2024-07-24 19:21:32.914174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.041 qpair failed and we were unable to recover it. 
00:24:27.041 [2024-07-24 19:21:32.914343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.914407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.914552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.914580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.914759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.914808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.914933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.914989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.915092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.915119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.915224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.915250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.915413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.915469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.915727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.915754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.915942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.915989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.916138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.916189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 
00:24:27.042 [2024-07-24 19:21:32.916298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.916325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.916426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.916452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.916682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.916731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.916864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.916890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.916983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.917009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.917194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.917219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.917397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.917448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.917684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.917734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.917910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.917967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.918067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.918098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 
00:24:27.042 [2024-07-24 19:21:32.918237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.918277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.918402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.918458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.918638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.918675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.918848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.918906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.919122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.919171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.919316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.919357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.919586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.919613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.919788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.919814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.920004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.920062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 00:24:27.042 [2024-07-24 19:21:32.920185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.042 [2024-07-24 19:21:32.920252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.042 qpair failed and we were unable to recover it. 
00:24:27.048 [2024-07-24 19:21:32.957688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.957737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.957951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.958001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.958100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.958126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.958257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.958309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.958434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.958460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.958598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.958651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.958773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.958800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.958944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.958997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.959155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.959182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.959308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.959360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 
00:24:27.048 [2024-07-24 19:21:32.959498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.959552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.959736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.959793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.959885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.959964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.960151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.960202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.960416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.960465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.960583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.960610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.960704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.960730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.960904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.960949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.961112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.048 [2024-07-24 19:21:32.961165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.048 qpair failed and we were unable to recover it. 00:24:27.048 [2024-07-24 19:21:32.961317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.049 [2024-07-24 19:21:32.961371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.049 qpair failed and we were unable to recover it. 
00:24:27.049 [2024-07-24 19:21:32.961578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.049 [2024-07-24 19:21:32.961630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.049 qpair failed and we were unable to recover it. 00:24:27.049 [2024-07-24 19:21:32.961788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.049 [2024-07-24 19:21:32.961815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.049 qpair failed and we were unable to recover it. 00:24:27.049 [2024-07-24 19:21:32.961993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.049 [2024-07-24 19:21:32.962019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.049 qpair failed and we were unable to recover it. 00:24:27.049 [2024-07-24 19:21:32.962203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.049 [2024-07-24 19:21:32.962255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.049 qpair failed and we were unable to recover it. 00:24:27.049 [2024-07-24 19:21:32.962367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.049 [2024-07-24 19:21:32.962393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.049 qpair failed and we were unable to recover it. 00:24:27.049 [2024-07-24 19:21:32.962548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.049 [2024-07-24 19:21:32.962602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.049 qpair failed and we were unable to recover it. 00:24:27.049 [2024-07-24 19:21:32.962697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.049 [2024-07-24 19:21:32.962728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.049 qpair failed and we were unable to recover it. 00:24:27.049 [2024-07-24 19:21:32.962936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.049 [2024-07-24 19:21:32.962984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.049 qpair failed and we were unable to recover it. 00:24:27.049 [2024-07-24 19:21:32.963128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.049 [2024-07-24 19:21:32.963179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.049 qpair failed and we were unable to recover it. 00:24:27.049 [2024-07-24 19:21:32.963277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.049 [2024-07-24 19:21:32.963304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.049 qpair failed and we were unable to recover it. 
00:24:27.049 [2024-07-24 19:21:32.963432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.049 [2024-07-24 19:21:32.963471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.049 qpair failed and we were unable to recover it.
00:24:27.051 [2024-07-24 19:21:32.975671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.051 [2024-07-24 19:21:32.975766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.051 qpair failed and we were unable to recover it.
00:24:27.052 [2024-07-24 19:21:32.989071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.052 [2024-07-24 19:21:32.989161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.052 qpair failed and we were unable to recover it.
00:24:27.053 [2024-07-24 19:21:32.994592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.994647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.994831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.994886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.995061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.995114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.995297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.995351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.995584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.995656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.995894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.995920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.996103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.996128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.996398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.996475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.996691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.996735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.997002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.997073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 
00:24:27.053 [2024-07-24 19:21:32.997247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.997300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.997519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.997545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.997767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.997835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.998074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.998144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.998431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.998457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.998686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.998714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.998907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.998957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.999158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.999212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.999413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.999440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:32.999593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.999637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 
00:24:27.053 [2024-07-24 19:21:32.999799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:32.999826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:33.000038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.053 [2024-07-24 19:21:33.000087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.053 qpair failed and we were unable to recover it. 00:24:27.053 [2024-07-24 19:21:33.000221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.000301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.000447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.000473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.000578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.000605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.000826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.000879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.001007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.001060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.001234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.001263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.001422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.001478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.001686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.001742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 
00:24:27.054 [2024-07-24 19:21:33.001975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.002045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.002229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.002296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.002448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.002477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.002642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.002669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.002766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.002792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.002887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.002912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.003011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.003037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.003239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.003266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.003402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.003445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.003599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.003656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 
00:24:27.054 [2024-07-24 19:21:33.003800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.003824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.004021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.004092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.004307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.004356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.004520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.004547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.004700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.004741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.004925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.004987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.005184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.005211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.005514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.005554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.005734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.005788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.005953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.005979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 
00:24:27.054 [2024-07-24 19:21:33.006163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.006232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.006417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.006443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.006669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.006734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.006918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.006986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.007170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.007234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.007528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.007554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.007731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.007784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.054 [2024-07-24 19:21:33.008104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.054 [2024-07-24 19:21:33.008174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.054 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.008361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.008436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.008651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.008718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 
00:24:27.055 [2024-07-24 19:21:33.008896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.008960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.009118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.009160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.009404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.009473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.009670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.009710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.009971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.009996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.010158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.010223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.010490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.010518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.010783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.010858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.011025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.011079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.011345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.011415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 
00:24:27.055 [2024-07-24 19:21:33.011609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.011676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.011844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.011907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.012135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.012205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.012386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.012455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.012720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.012791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.013108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.013173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.013362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.013429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.013737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.013807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.013964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.014020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.014295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.014371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 
00:24:27.055 [2024-07-24 19:21:33.014618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.014692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.014857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.014920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.015132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.015201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.015539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.015594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.015779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.015844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.016044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.016113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.016357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.016432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.016623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.016688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.016922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.016947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.017218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.017291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 
00:24:27.055 [2024-07-24 19:21:33.017466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.017528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.017687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.017737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.017940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.017966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.055 [2024-07-24 19:21:33.018120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.055 [2024-07-24 19:21:33.018147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.055 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.018414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.018441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.018703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.018775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.019079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.019153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.019322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.019386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.019587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.019659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.019846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.019901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 
00:24:27.056 [2024-07-24 19:21:33.020059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.020116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.020420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.020446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.020688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.020739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.020878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.020934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.021065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.021092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.021344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.021396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.022130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.022161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.022396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.022423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.022559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.022588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.022822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.022886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 
00:24:27.056 [2024-07-24 19:21:33.023100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.023149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.023332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.023360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.023554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.023581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.023762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.023813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.024029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.024076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.024222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.024286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.024475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.024536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.024665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.024690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.024822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.024848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.024986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.025012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 
00:24:27.056 [2024-07-24 19:21:33.025156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.025182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.025323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.025348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.025477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.025508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.025642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.025671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.025783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.025809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.025923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.025949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.026042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.026068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.026185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.026211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.026315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.026341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.056 qpair failed and we were unable to recover it. 00:24:27.056 [2024-07-24 19:21:33.026465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.056 [2024-07-24 19:21:33.026497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 
00:24:27.057 [2024-07-24 19:21:33.026624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.026649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.026759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.026787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.026910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.026938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.027060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.027100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.027213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.027243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.027396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.027424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.027543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.027573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.027673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.027699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.027817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.027849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.027991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.028018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 
00:24:27.057 [2024-07-24 19:21:33.028153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.028179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.028291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.028317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.028454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.028487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.028621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.028647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.028771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.028797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.028891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.028917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.029032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.029058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.029176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.029203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.029302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.029330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.029435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.029460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 
00:24:27.057 [2024-07-24 19:21:33.029581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.029608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.029709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.029736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.029847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.029874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.029999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.030026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.030119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.030146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.030245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.030272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.030385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.030410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.030536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.030562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.030683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.030709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.030817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.030846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 
00:24:27.057 [2024-07-24 19:21:33.030951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.030980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.031084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.031110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.031224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.031251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.031344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.031370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.031468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.057 [2024-07-24 19:21:33.031501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.057 qpair failed and we were unable to recover it. 00:24:27.057 [2024-07-24 19:21:33.031627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.031653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.031756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.031784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.031908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.031934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.032043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.032070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.032185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.032213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 
00:24:27.058 [2024-07-24 19:21:33.032323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.032349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.032441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.032467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.032595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.032622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.032736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.032762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.032874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.032901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.033007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.033033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.033131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.033157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.033255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.033281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.033382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.033413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.033529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.033556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 
00:24:27.058 [2024-07-24 19:21:33.033653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.033679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.033781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.033807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.033920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.033947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.034055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.034089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.034183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.034209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.034311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.034337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.034439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.034466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.034570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.034596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.034688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.034714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.034822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.034847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 
00:24:27.058 [2024-07-24 19:21:33.034955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.034984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.035076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.035102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.035200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.035226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.035320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.058 [2024-07-24 19:21:33.035348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.058 qpair failed and we were unable to recover it. 00:24:27.058 [2024-07-24 19:21:33.035452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.035486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.035585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.035610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.035708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.035734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.035832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.035858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.035978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.036015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.036140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.036178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 
00:24:27.059 [2024-07-24 19:21:33.036326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.036366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.036498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.036540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.036649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.036675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.036779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.036806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.036917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.036943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.037046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.037075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.037190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.037216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.037320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.037346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.037443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.037469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.037593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.037620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 
00:24:27.059 [2024-07-24 19:21:33.037716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.037742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.037863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.037890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.037991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.038016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.038108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.038134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.038245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.038281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.038391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.038427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.038539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.038566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.038667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.038693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.038799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.038829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.038924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.038949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 
00:24:27.059 [2024-07-24 19:21:33.039045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.039071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.039169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.039196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.039292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.039323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.039442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.039468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.039595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.039621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.039721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.039748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.039848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.039881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.039986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.040013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.040132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.040171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 00:24:27.059 [2024-07-24 19:21:33.040287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.059 [2024-07-24 19:21:33.040316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.059 qpair failed and we were unable to recover it. 
00:24:27.060 [2024-07-24 19:21:33.040411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.060 [2024-07-24 19:21:33.040437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.060 qpair failed and we were unable to recover it. 00:24:27.373 [2024-07-24 19:21:33.040565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.373 [2024-07-24 19:21:33.040594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.373 qpair failed and we were unable to recover it. 00:24:27.373 [2024-07-24 19:21:33.040704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.373 [2024-07-24 19:21:33.040732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.373 qpair failed and we were unable to recover it. 00:24:27.373 [2024-07-24 19:21:33.040837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.373 [2024-07-24 19:21:33.040863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.373 qpair failed and we were unable to recover it. 00:24:27.373 [2024-07-24 19:21:33.040973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.373 [2024-07-24 19:21:33.040999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.373 qpair failed and we were unable to recover it. 00:24:27.373 [2024-07-24 19:21:33.041093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.373 [2024-07-24 19:21:33.041119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.373 qpair failed and we were unable to recover it. 00:24:27.373 [2024-07-24 19:21:33.041235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.373 [2024-07-24 19:21:33.041261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.373 qpair failed and we were unable to recover it. 00:24:27.373 [2024-07-24 19:21:33.041383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.373 [2024-07-24 19:21:33.041410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.373 qpair failed and we were unable to recover it. 00:24:27.373 [2024-07-24 19:21:33.041507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.373 [2024-07-24 19:21:33.041545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.373 qpair failed and we were unable to recover it. 00:24:27.373 [2024-07-24 19:21:33.041655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.041681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 
00:24:27.374 [2024-07-24 19:21:33.041792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.041819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.041920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.041946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.042047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.042073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.042169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.042195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.042316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.042347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.042454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.042486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.042591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.042617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.042730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.042759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.042860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.042888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.042997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.043024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 
00:24:27.374 [2024-07-24 19:21:33.043120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.043148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.043251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.043277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.043374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.043402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.043513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.043540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.043651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.043678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.043779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.043805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.043929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.043955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.044062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.044089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.044188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.044218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.044331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.044358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 
00:24:27.374 [2024-07-24 19:21:33.044459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.044499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.044612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.044638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.044756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.044784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.044901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.044927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.045028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.045056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.045157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.045184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.045303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.045330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.045431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.045457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.045583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.045610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.045708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.045734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 
00:24:27.374 [2024-07-24 19:21:33.045833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.045858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.045956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.045984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.046088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.046115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.046238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.046264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.374 [2024-07-24 19:21:33.046366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.374 [2024-07-24 19:21:33.046392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.374 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.046504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.046540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.046647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.046673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.046781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.046807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.046909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.046935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.047060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.047085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 
00:24:27.375 [2024-07-24 19:21:33.047221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.047247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.047347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.047373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.047489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.047515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.047614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.047641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.047743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.047769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.047891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.047929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.048078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.048108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.048211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.048239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.048368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.048395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.048504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.048534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 
00:24:27.375 [2024-07-24 19:21:33.048637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.048664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.048763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.048790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.048916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.048943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.049076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.049103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.049230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.049256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.049357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.049383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.049494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.049521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.049623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.049650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.049780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.049811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.049911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.049938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 
00:24:27.375 [2024-07-24 19:21:33.050038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.050063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.050193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.050220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.050325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.050354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.050460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.050494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.051326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.051358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.051486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.051515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.051621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.051649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.051761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.051787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.051887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.051913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 00:24:27.375 [2024-07-24 19:21:33.052013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.375 [2024-07-24 19:21:33.052041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.375 qpair failed and we were unable to recover it. 
00:24:27.381 [2024-07-24 19:21:33.082365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.381 [2024-07-24 19:21:33.082392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.381 qpair failed and we were unable to recover it. 00:24:27.381 [2024-07-24 19:21:33.082517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.381 [2024-07-24 19:21:33.082545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.381 qpair failed and we were unable to recover it. 00:24:27.381 [2024-07-24 19:21:33.082698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.381 [2024-07-24 19:21:33.082749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.381 qpair failed and we were unable to recover it. 00:24:27.381 [2024-07-24 19:21:33.082872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.381 [2024-07-24 19:21:33.082917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.381 qpair failed and we were unable to recover it. 00:24:27.381 [2024-07-24 19:21:33.083028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.381 [2024-07-24 19:21:33.083076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.381 qpair failed and we were unable to recover it. 00:24:27.381 [2024-07-24 19:21:33.083177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.381 [2024-07-24 19:21:33.083203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.381 qpair failed and we were unable to recover it. 00:24:27.381 [2024-07-24 19:21:33.083319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.381 [2024-07-24 19:21:33.083367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.381 qpair failed and we were unable to recover it. 00:24:27.381 [2024-07-24 19:21:33.083493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.381 [2024-07-24 19:21:33.083539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.381 qpair failed and we were unable to recover it. 00:24:27.381 [2024-07-24 19:21:33.083672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.083711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.083820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.083880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 
00:24:27.382 [2024-07-24 19:21:33.084001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.084047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.084143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.084171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.084290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.084340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.084506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.084570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.084725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.084789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.084917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.084972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.085124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.085152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.085315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.085346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.085471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.085509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.085615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.085642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 
00:24:27.382 [2024-07-24 19:21:33.085743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.085770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.085873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.085900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.086052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.086103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.086231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.086279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.086372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.086398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.086560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.086613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.086742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.086783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.086910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.086952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.087084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.087145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.087296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.087323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 
00:24:27.382 [2024-07-24 19:21:33.087452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.087544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.087672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.087713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.087863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.087917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.088036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.088098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.088301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.088349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.088489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.088531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.088677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.088720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.088818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.088845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.088973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.089014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.089140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.089221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 
00:24:27.382 [2024-07-24 19:21:33.089361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.089387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.089541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.089577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.089707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.089765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.089887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.382 [2024-07-24 19:21:33.089931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.382 qpair failed and we were unable to recover it. 00:24:27.382 [2024-07-24 19:21:33.090055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.090096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.090252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.090295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.090404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.090430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.090537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.090565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.090663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.090690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.090797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.090829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 
00:24:27.383 [2024-07-24 19:21:33.090941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.090968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.091134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.091182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.091295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.091344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.091495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.091546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.091653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.091681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.091779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.091805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.091924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.091950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.092052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.092078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.092183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.092210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.092394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.092425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 
00:24:27.383 [2024-07-24 19:21:33.093191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.093227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.093330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.093358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.093472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.093537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.093669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.093723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.093857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.093883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.093978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.094004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.094099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.094125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.094234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.094277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.094389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.094414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 00:24:27.383 [2024-07-24 19:21:33.094550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.383 [2024-07-24 19:21:33.094606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.383 qpair failed and we were unable to recover it. 
00:24:27.383 (connect() failed, errno = 111 / qpair-failure records for tqpair=0x7f05f4000b90 and 0x7f05fc000b90 continue around the following record; duplicates condensed)
00:24:27.383 [2024-07-24 19:21:33.095278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd50190 is same with the state(5) to be set
00:24:27.384 (connect() failed, errno = 111 / qpair-failure records repeat for tqpair=0x7f05fc000b90, 0xd42120, and 0x7f05f4000b90, all against addr=10.0.0.2, port=4420; duplicate records condensed)
00:24:27.386 [2024-07-24 19:21:33.111383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.386 [2024-07-24 19:21:33.111409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.386 qpair failed and we were unable to recover it.
00:24:27.386 [2024-07-24 19:21:33.111536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.111565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-24 19:21:33.111686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.111730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-24 19:21:33.111852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.111906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-24 19:21:33.112029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.112070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-24 19:21:33.112234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.112260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-24 19:21:33.112380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.112422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-24 19:21:33.112530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.112558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-24 19:21:33.112670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.112696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-24 19:21:33.112807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.112847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-24 19:21:33.112954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.112982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 
00:24:27.386 [2024-07-24 19:21:33.113106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.113146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-24 19:21:33.113239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.113265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-24 19:21:33.113372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.113412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-24 19:21:33.113509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.113536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-24 19:21:33.113637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-24 19:21:33.113663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.113760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.113785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.113885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.113911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.114009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.114035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.114138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.114164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.114262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.114287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 
00:24:27.387 [2024-07-24 19:21:33.114393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.114423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.114529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.114557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.114672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.114716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.114820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.114847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.114993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.115041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.115135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.115160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.115256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.115281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.115376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.115403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.115512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.115546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.115687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.115738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 
00:24:27.387 [2024-07-24 19:21:33.115927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.115975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.116108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.116151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.116269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.116311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.116441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.116507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.116648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.116708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.116835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.116875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.116987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.117030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.117158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.117215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.117347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.117393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.117504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.117532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 
00:24:27.387 [2024-07-24 19:21:33.117697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.117739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.117856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.117885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.118013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.118052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.118174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.118222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.118353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.118393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.118501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.118529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.118653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.118694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.118880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.118927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.119039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.119086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-24 19:21:33.119249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.119276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 
00:24:27.387 [2024-07-24 19:21:33.119405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-24 19:21:33.119445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.119573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.119617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.119737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.119780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.119903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.119944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.120062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.120106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.120238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.120286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.120411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.120461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.120570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.120597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.120724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.120772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.120893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.120934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 
00:24:27.388 [2024-07-24 19:21:33.121065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.121105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.121266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.121311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.121509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.121556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.121664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.121692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.121825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.121879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.121998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.122024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.122181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.122206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.122310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.122340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.122477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.122529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.122653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.122721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 
00:24:27.388 [2024-07-24 19:21:33.122853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.122897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.123043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.123090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.123249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.123279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.123389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.123418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.123541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.123577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.123736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.123798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.123959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.124013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.124212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.124271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.124470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.124534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.124664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.124721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 
00:24:27.388 [2024-07-24 19:21:33.124904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.124963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.388 [2024-07-24 19:21:33.125115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.388 [2024-07-24 19:21:33.125185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.388 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.125384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.125446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.125661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.125709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.125855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.125887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.126004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.126030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.126156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.126195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.126290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.126316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.126435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.126463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.126581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.126607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 
00:24:27.389 [2024-07-24 19:21:33.126712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.126741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.126868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.126912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.127027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.127070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.127217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.127281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.127460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.127536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.127723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.127783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.128016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.128071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.128262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.128318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.128503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.128561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.128736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.128795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 
00:24:27.389 [2024-07-24 19:21:33.128985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.129043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.129241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.129267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.129443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.129494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.129615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.129655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.129760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.129789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.129908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.129953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.130052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.130078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.130233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.130279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.130408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.130497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.130625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.130666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 
00:24:27.389 [2024-07-24 19:21:33.130831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.130861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.131022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.131085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.131261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.131320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.131443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.131492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.131615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.131663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.131790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.131844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.131976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.132057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.389 [2024-07-24 19:21:33.132206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.389 [2024-07-24 19:21:33.132243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.389 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-24 19:21:33.132386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-24 19:21:33.132468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-24 19:21:33.132627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-24 19:21:33.132707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 
00:24:27.390 [2024-07-24 19:21:33.132834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-24 19:21:33.132886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-24 19:21:33.133021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-24 19:21:33.133049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-24 19:21:33.133289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-24 19:21:33.133339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-24 19:21:33.133457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-24 19:21:33.133508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-24 19:21:33.133608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-24 19:21:33.133634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-24 19:21:33.133821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-24 19:21:33.133868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-24 19:21:33.133988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-24 19:21:33.134030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-24 19:21:33.134194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-24 19:21:33.134224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-24 19:21:33.134355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-24 19:21:33.134396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-24 19:21:33.134562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-24 19:21:33.134605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 
00:24:27.390 [2024-07-24 19:21:33.134729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.390 [2024-07-24 19:21:33.134773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.390 qpair failed and we were unable to recover it.
00:24:27.390 [2024-07-24 19:21:33.135300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.390 [2024-07-24 19:21:33.135344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.390 qpair failed and we were unable to recover it.
00:24:27.390 [2024-07-24 19:21:33.138189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.390 [2024-07-24 19:21:33.138243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.390 qpair failed and we were unable to recover it.
00:24:27.394 [2024-07-24 19:21:33.164863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.394 [2024-07-24 19:21:33.164921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.394 qpair failed and we were unable to recover it.
00:24:27.396 [... the same three-line failure cycle (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=... with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously for tqpairs 0xd42120, 0x7f05fc000b90, 0x7f0604000b90, and 0x7f05f4000b90 from 2024-07-24 19:21:33.134 through 19:21:33.175 ...]
00:24:27.396 [2024-07-24 19:21:33.175505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.175548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.175662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.175709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.175808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.175835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.175942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.175970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.176095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.176140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.176267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.176293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.176419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.176445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.176544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.176570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.176671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.176697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.176866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.176896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 
00:24:27.396 [2024-07-24 19:21:33.177005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.177031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.177124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.177150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.177261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.177306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.177402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.177429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.177566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.177609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.177738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.177796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.177976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.178018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.178137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.178182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.178286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.178314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.178424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.178450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 
00:24:27.396 [2024-07-24 19:21:33.178560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.178588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.178756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.178786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.178926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.178973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.179088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.179133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.179289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.179341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-24 19:21:33.179493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-24 19:21:33.179542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.179704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.179735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.179893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.179934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.180055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.180103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.180228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.180274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 
00:24:27.397 [2024-07-24 19:21:33.180370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.180404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.180501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.180528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.180651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.180695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.180824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.180869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.180984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.181033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.181159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.181185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.181289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.181317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.181448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.181475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.181628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.181680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.181827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.181868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 
00:24:27.397 [2024-07-24 19:21:33.181983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.182029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.182146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.182201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.182326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.182368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.182493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.182540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.182662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.182729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.182842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.182868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.182988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.183036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.183150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.183195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.183295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.183323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.183424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.183452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 
00:24:27.397 [2024-07-24 19:21:33.183630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.183674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.183778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.183819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.183973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.184015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.184139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.184180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.184289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.184319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.184476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.184512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.184636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.184665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.184808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-24 19:21:33.184849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-24 19:21:33.184964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.185009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.185122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.185162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 
00:24:27.398 [2024-07-24 19:21:33.185290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.185356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.185495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.185539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.185700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.185730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.185902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.185932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.186096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.186154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.186268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.186294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.186407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.186451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.186585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.186625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.186752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.186797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.186912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.186962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 
00:24:27.398 [2024-07-24 19:21:33.187081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.187122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.187237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.187292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.187440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.187493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.187610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.187658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.187761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.187790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.187893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.187920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.188012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.188038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.188156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.188199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.188327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.188380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.188515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.188557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 
00:24:27.398 [2024-07-24 19:21:33.188722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.188752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.188883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.188926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.189045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.189090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.189239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.189284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.189404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.189447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.189576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.189619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.189767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.189807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.189920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.189947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.190064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.190104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.190218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.190261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 
00:24:27.398 [2024-07-24 19:21:33.190366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.190391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.190532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.190575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.398 qpair failed and we were unable to recover it. 00:24:27.398 [2024-07-24 19:21:33.190718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.398 [2024-07-24 19:21:33.190759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.190872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.190917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.191031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.191077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.191187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.191233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.191348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.191389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.191493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.191519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.191648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.191712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.191844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.191889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 
00:24:27.399 [2024-07-24 19:21:33.192031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.192071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.192194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.192263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.192398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.192442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.192554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.192601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.192791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.192838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.192960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.193001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.193139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.193183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.193282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.193309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.193422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.193470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.193595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.193642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 
00:24:27.399 [2024-07-24 19:21:33.193752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.193802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.193943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.193986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.194095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.194130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.194265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.194311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.194427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.194468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.194598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.194645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.194768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.194821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.194934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.194982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.195136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.195191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.195330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.195375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 
00:24:27.399 [2024-07-24 19:21:33.195501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.195547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.195668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.195713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.195843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.195883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.196009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.196053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.196316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.196370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.196495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.196538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.196636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.196664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.196764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.399 [2024-07-24 19:21:33.196789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.399 qpair failed and we were unable to recover it. 00:24:27.399 [2024-07-24 19:21:33.196956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-24 19:21:33.197007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-24 19:21:33.197127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-24 19:21:33.197168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 
00:24:27.400 [2024-07-24 19:21:33.197291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.400 [2024-07-24 19:21:33.197331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.400 qpair failed and we were unable to recover it.
00:24:27.400 [2024-07-24 19:21:33.198155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.400 [2024-07-24 19:21:33.198215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.400 qpair failed and we were unable to recover it.
00:24:27.400 [2024-07-24 19:21:33.200368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.400 [2024-07-24 19:21:33.200419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.400 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triples repeat continuously from 2024-07-24 19:21:33.197291 through 19:21:33.234190, cycling over tqpair=0x7f05fc000b90, 0x7f05f4000b90, and 0xd42120, all with addr=10.0.0.2, port=4420]
00:24:27.406 [2024-07-24 19:21:33.234339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.234404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.234530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.234563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.234713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.234767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.234906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.234988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.235082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.235107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.235278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.235336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.235462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.235513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.235681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.235731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.235906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.235959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.236098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.236124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 
00:24:27.406 [2024-07-24 19:21:33.236243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.236305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.236432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.236472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.236595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.236628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.236783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.236848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.236945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.236973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.237091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.237132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.237249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.237304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.237429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.237457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.237606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.237691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.237865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.237915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 
00:24:27.406 [2024-07-24 19:21:33.238021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.238049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.238161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.238194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.238333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.238376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.238556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.238584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.238703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.238747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.238877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.238921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.239084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.239137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.239263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.239331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.239538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.239569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.239689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.239731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 
00:24:27.406 [2024-07-24 19:21:33.239899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.239953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.240051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.240078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.240175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.240200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.240294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.240319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-24 19:21:33.240413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-24 19:21:33.240438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.240628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.240679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.240845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.240893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.240993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.241019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.241115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.241141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.241244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.241273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 
00:24:27.407 [2024-07-24 19:21:33.241374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.241399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.241503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.241530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.241653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.241733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.241869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.241895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.242013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.242054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.242169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.242212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.242308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.242333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.242449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.242572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.242699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.242764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.242962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.243012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 
00:24:27.407 [2024-07-24 19:21:33.243109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.243136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.243292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.243346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.243497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.243541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.243669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.243753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.243936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.243984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.244090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.244123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.244252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.244294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.244413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.244454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.244565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.244592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.244712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.244779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 
00:24:27.407 [2024-07-24 19:21:33.244918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.244963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.245141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.245190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.245308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.245353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.245450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.245476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.245637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-24 19:21:33.245719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-24 19:21:33.245840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.245883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.245980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.246006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.246128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.246186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.246314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.246359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.246512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.246556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 
00:24:27.408 [2024-07-24 19:21:33.246707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.246771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.246930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.246999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.247130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.247171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.247337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.247390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.247534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.247617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.247780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.247806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.247926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.247971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.248098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.248182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.248280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.248307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.248452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.248542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 
00:24:27.408 [2024-07-24 19:21:33.248732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.248783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.248899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.248932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.249065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.249110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.249205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.249231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.249342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.249386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.249497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.249553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.249651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.249677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.249854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.249881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.250034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.250104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.250288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.250340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 
00:24:27.408 [2024-07-24 19:21:33.250524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.250575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.250692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.250737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.250832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.250858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.251040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.251096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.251279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.251333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.251451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.251529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.251727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.251779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.251925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.251994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.252181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.252234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.252433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.252499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 
00:24:27.408 [2024-07-24 19:21:33.252624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.252668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-24 19:21:33.252851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-24 19:21:33.252904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.253080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.253126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.253279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.253325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.253442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.253498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.253679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.253725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.253873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.253953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.254086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.254129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.254261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.254344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.254466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.254515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 
00:24:27.409 [2024-07-24 19:21:33.254633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.254676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.254826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.254893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.254991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.255017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.255182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.255231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.255401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.255454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.255591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.255638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.255741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.255769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.255940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.255993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.256108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.256150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.256241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.256267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 
00:24:27.409 [2024-07-24 19:21:33.256378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.256424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.256526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.256553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.256671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.256714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.256889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.256940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.257135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.257187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.257303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.257336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.257467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.257517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.257648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.257692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.257789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.257815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-24 19:21:33.257925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-24 19:21:33.257969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 
00:24:27.409 [2024-07-24 19:21:33.258155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.409 [2024-07-24 19:21:33.258209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.409 qpair failed and we were unable to recover it.
00:24:27.409 [repeated entries trimmed: the same triplet — posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=... with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." — recurs continuously from 19:21:33.258330 through 19:21:33.293840, cycling over tqpair=0x7f05fc000b90, tqpair=0x7f05f4000b90, and tqpair=0xd42120, all against addr=10.0.0.2, port=4420]
00:24:27.415 [2024-07-24 19:21:33.294018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.294074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.294184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.294229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.294367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.294424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.294528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.294554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.294652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.294679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.294790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.294824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.294932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.294958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.295051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.295077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.295204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.295287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.295410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.295456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 
00:24:27.415 [2024-07-24 19:21:33.295641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.295690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.295835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.295921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.296046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.296090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.296208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.296251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.296422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.296475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.296660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.296708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.296869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.296929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.297048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.297091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.297203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.297238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.297362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.297411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 
00:24:27.415 [2024-07-24 19:21:33.297551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.297604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.297767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.297832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.298009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.298068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.298235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.298287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.298411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.298456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.298568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.298597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.298719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.298782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.298910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.298953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.299072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.415 [2024-07-24 19:21:33.299132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.415 qpair failed and we were unable to recover it. 00:24:27.415 [2024-07-24 19:21:33.299375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.299428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 
00:24:27.416 [2024-07-24 19:21:33.299597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.299624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.299794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.299847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.299956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.300000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.300119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.300164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.300289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.300332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.300443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.300497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.300621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.300684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.300805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.300849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.300961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.301006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.301158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.301239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 
00:24:27.416 [2024-07-24 19:21:33.301367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.301409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.301601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.301659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.301846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.301896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.302021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.302063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.302170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.302197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.302363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.302390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.302500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.302527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.302622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.302648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.302756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.302801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.303001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.303058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 
00:24:27.416 [2024-07-24 19:21:33.303179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.303224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.303354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.303398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.303501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.303528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.303695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.303749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.303892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.303938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.304120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.304172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.304320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.304389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.304560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.304604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.304716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.304761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.304893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.304940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 
00:24:27.416 [2024-07-24 19:21:33.305117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.305170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.305287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.305332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.305455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.305502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.305690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.305739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.305930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.416 [2024-07-24 19:21:33.305985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.416 qpair failed and we were unable to recover it. 00:24:27.416 [2024-07-24 19:21:33.306180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.306229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.306366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.306411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.306517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.306552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.306754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.306808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.306919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.306965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 
00:24:27.417 [2024-07-24 19:21:33.307061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.307087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.307187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.307215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.307312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.307339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.307431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.307456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.307582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.307627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.307790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.307817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.307990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.308043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.308218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.308271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.308386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.308431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.308621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.308675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 
00:24:27.417 [2024-07-24 19:21:33.308821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.308909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.309033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.309079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.309254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.309307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.309429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.309474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.309611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.309652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.309773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.309814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.310060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.310115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.310234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.310278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.310400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.310444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.310624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.310683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 
00:24:27.417 [2024-07-24 19:21:33.310852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.310904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.311028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.311094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.311203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.311229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.311339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.311383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.311501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.311546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.311695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.311778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.311898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.311941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.312124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.312172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.312281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.312314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.312459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.312508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 
00:24:27.417 [2024-07-24 19:21:33.312654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.312733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.312882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.312924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.313022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.313049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.313237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.313263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.313371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.313416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.313517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.313545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.313730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.313781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.417 [2024-07-24 19:21:33.313880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.417 [2024-07-24 19:21:33.313907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.417 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.313999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.314025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.314194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.314246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 
00:24:27.418 [2024-07-24 19:21:33.314415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.314465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.314603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.314647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.314821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.314871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.314985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.315027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.315138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.315182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.315288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.315321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.315439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.315469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.315683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.315738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.315854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.315913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.316008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.316034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 
00:24:27.418 [2024-07-24 19:21:33.316202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.316259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.316376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.316420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.316551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.316608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.316755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.316793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.316914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.316955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.317144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.317195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.317344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.317429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.317609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.317659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.317789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.317872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 00:24:27.418 [2024-07-24 19:21:33.317992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.418 [2024-07-24 19:21:33.318035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.418 qpair failed and we were unable to recover it. 
00:24:27.418 [2024-07-24 19:21:33.318158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.418 [2024-07-24 19:21:33.318212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.418 qpair failed and we were unable to recover it.
00:24:27.418 [2024-07-24 19:21:33.318738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.418 [2024-07-24 19:21:33.318792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.418 qpair failed and we were unable to recover it.
00:24:27.418 [2024-07-24 19:21:33.319385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.418 [2024-07-24 19:21:33.319431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.418 qpair failed and we were unable to recover it.
00:24:27.423 [2024-07-24 19:21:33.353705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.423 [2024-07-24 19:21:33.353781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.423 qpair failed and we were unable to recover it.
[The same three-line error record repeats for every remaining reconnect attempt between 19:21:33.318 and 19:21:33.359, cycling over tqpairs 0x7f05f4000b90, 0x7f05fc000b90, 0xd42120, and 0x7f0604000b90 against addr=10.0.0.2, port=4420; only the timestamps differ.]
00:24:27.704 [2024-07-24 19:21:33.359984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.360010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.360133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.360179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.360350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.360414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.360539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.360599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.360733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.360774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.360899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.360945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.361142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.361189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.361310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.361371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.361467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.361498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.361659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.361710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 
00:24:27.704 [2024-07-24 19:21:33.361859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.361925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.362067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.362114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.362240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.362279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.362381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.362408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.362577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.362627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.362794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.362856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.363030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.363080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.363266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.704 [2024-07-24 19:21:33.363292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.704 qpair failed and we were unable to recover it. 00:24:27.704 [2024-07-24 19:21:33.363471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.363530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.363633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.363660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 
00:24:27.705 [2024-07-24 19:21:33.363796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.363851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.363970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.364024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.364146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.364214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.364352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.364396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.364557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.364611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.364726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.364783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.364904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.364948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.365126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.365151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.365275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.365319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.365412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.365438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 
00:24:27.705 [2024-07-24 19:21:33.365640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.365691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.365842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.365894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.366018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.366077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.366205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.366286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.366516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.366543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.366641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.366668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.366769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.366796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.366997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.367051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.367234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.367285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.367432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.367509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 
00:24:27.705 [2024-07-24 19:21:33.367629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.367686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.367814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.367894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.368068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.368116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.368209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.368235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.368344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.368405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.368535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.368597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.368700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.368727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.368855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.368912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.369109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.369162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.369310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.369339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 
00:24:27.705 [2024-07-24 19:21:33.369487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.369516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.369652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.369694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.369814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.369860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.369961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-24 19:21:33.369988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-24 19:21:33.370116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.370200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.370448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.370509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.370609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.370636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.370812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.370838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.371015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.371070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.371282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.371346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 
00:24:27.706 [2024-07-24 19:21:33.371524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.371586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.371789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.371839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.372051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.372102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.372235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.372303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.372447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.372535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.372703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.372751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.372852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.372880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.372995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.373043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.373173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.373256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.373406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.373467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 
00:24:27.706 [2024-07-24 19:21:33.373665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.373724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.373969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.374030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.374231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.374293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.374544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.374605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.374774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.374840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.375001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.375027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.375239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.375297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.375531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.375570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.375756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.375828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.376052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.376111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 
00:24:27.706 [2024-07-24 19:21:33.376287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.376343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.376539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.376597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.376725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.376776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.376984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.377036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.377138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.377164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.377353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.377410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.377533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.377595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.377738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.377781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.377958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.377984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-24 19:21:33.378230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-24 19:21:33.378288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 
00:24:27.707 [2024-07-24 19:21:33.378442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.378510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.378746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.378800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.378930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.378982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.379130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.379183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.379380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.379432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.379536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.379564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.379747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.379800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.380030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.380082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.380232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.380286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.380470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.380525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 
00:24:27.707 [2024-07-24 19:21:33.380730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.380779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.380960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.381011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.381254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.381307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.381437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.381493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.381703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.381752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.381921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.381975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.382068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.382093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.382214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.382262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.382394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.382445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.382635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.382685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 
00:24:27.707 [2024-07-24 19:21:33.382830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.382876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.383059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.383115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.383263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.383329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.383477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.383507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.383613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.383640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.383827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.383875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.384025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.384089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.384187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.384213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.384372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.384430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.384636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.384705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 
00:24:27.707 [2024-07-24 19:21:33.384852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.384932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.385114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.385163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.385304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.385332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.385466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.385524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.385716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.385768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-24 19:21:33.385914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-24 19:21:33.385946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.386142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.386190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.386365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.386416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.386574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.386613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.386814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.386841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 
00:24:27.708 [2024-07-24 19:21:33.387065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.387126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.387275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.387337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.387507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.387562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.387689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.387741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.387890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.387971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.388094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.388138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.388293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.388341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.388463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.388520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.388708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.388756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.388926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.388975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 
00:24:27.708 [2024-07-24 19:21:33.389099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.389145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.389277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.389326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.389422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.389509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.389622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.389680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.389796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.389841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.390013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.390061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.390174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.390231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.390474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.390529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.390713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.390762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.390949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.390999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 
00:24:27.708 [2024-07-24 19:21:33.391121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.391162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.391318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.391368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.391550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.391606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.391735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.391782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.391967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.708 [2024-07-24 19:21:33.392021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.708 qpair failed and we were unable to recover it. 00:24:27.708 [2024-07-24 19:21:33.392186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.392244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.392405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.392433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.392536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.392562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.392731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.392784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.392911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.392992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 
00:24:27.709 [2024-07-24 19:21:33.393122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.393161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.393346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.393395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.393637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.393690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.393809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.393853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.394026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.394077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.394252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.394302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.394430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.394490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.394646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.394708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.394846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.394898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.395023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.395104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 
00:24:27.709 [2024-07-24 19:21:33.395236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.395290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.395489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.395517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.395646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.395694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.395830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.395909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.396076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.396129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.396295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.396359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.396522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.396550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.396649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.396674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.396804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.396884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.397129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.397190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 
00:24:27.709 [2024-07-24 19:21:33.397369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.397416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.397559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.397613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.397722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.397748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.397960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.398011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.398197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.398247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.398393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.398460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.398696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.398752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.399006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.399054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.399304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.399350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.399506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.399573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 
00:24:27.709 [2024-07-24 19:21:33.399709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-24 19:21:33.399765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-24 19:21:33.399896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.399943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.400099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.400159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.400352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.400402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.400653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.400701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.400847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.400872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.400972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.400999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.401093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.401119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.401249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.401330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.401443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.401469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 
00:24:27.710 [2024-07-24 19:21:33.401575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.401602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.401752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.401778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.401873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.401899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.402065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.402115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.402214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.402240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.402390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.402458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.402663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.402717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.402902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.402967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.403193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.403253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.403421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.403496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 
00:24:27.710 [2024-07-24 19:21:33.403676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.403737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.403939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.403995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.404177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.404232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.404431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.404519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.404697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.404759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.404922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.404987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.405183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.405241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.405530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.405590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.405765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.405825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.405993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.406019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 
00:24:27.710 [2024-07-24 19:21:33.406262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.406320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.406491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.406517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.406679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.406737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.406996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.407057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.407314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.407371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.407626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.407685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.407859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-24 19:21:33.407900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-24 19:21:33.408056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.408114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.408268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.408318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.408491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.408518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 
00:24:27.711 [2024-07-24 19:21:33.408681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.408722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.408918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.408958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.409212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.409270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.409457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.409536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.409709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.409774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.409932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.409971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.410123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.410172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.410305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.410372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.410518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.410564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.410739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.410805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 
00:24:27.711 [2024-07-24 19:21:33.410976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.411041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.411204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.411262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.411417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.411446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.411602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.411669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.411800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.411841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.412015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.412076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.412202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.412258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.412375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.412402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.412570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.412619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.412869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.412916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 
00:24:27.711 [2024-07-24 19:21:33.413042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.413089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.413187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.413214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.413340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.413385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.413490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.413519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.413666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.413732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.413917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.413976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.414097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.414145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.414265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.414309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.414409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.414438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.414701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.414750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 
00:24:27.711 [2024-07-24 19:21:33.414871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.414920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.415061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.415088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.415201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-24 19:21:33.415250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-24 19:21:33.415374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.415420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.415591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.415657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.415785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.415831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.415956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.416003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.416130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.416188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.416383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.416427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.416551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.416598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 
00:24:27.712 [2024-07-24 19:21:33.416699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.416725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.416858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.416905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.417076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.417122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.417255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.417302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.417441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.417492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.417623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.417669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.417791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.417847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.418043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.418095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.418259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.418321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.418423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.418452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 
00:24:27.712 [2024-07-24 19:21:33.418556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.418582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.418771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.418825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.418943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.418988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.419170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.419220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.419349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.419430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.419573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.419655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.419827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.419881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.420001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.420047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.420168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.420215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.420306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.420331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 
00:24:27.712 [2024-07-24 19:21:33.420450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.420505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.420599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.420624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.420717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.420741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.420868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.420915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.421055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.421099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.421280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.421335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.421460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.421516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.421644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.421685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.421796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-24 19:21:33.421844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-24 19:21:33.421963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.422010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 
00:24:27.713 [2024-07-24 19:21:33.422157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.422216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.422400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.422456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.422597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.422638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.422807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.422859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.423040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.423099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.423289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.423342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.423465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.423511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.423634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.423682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.423805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.423851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.424003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.424068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 
00:24:27.713 [2024-07-24 19:21:33.424200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.424281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.424411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.424450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.424576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.424622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.424782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.424838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.425023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.425075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.425171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.425197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.425325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.425408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.425523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.425561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.425698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.425777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 00:24:27.713 [2024-07-24 19:21:33.425896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.713 [2024-07-24 19:21:33.425940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.713 qpair failed and we were unable to recover it. 
00:24:27.713 [2024-07-24 19:21:33.426128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.713 [2024-07-24 19:21:33.426180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.713 qpair failed and we were unable to recover it.
00:24:27.713 [2024-07-24 19:21:33.426335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.713 [2024-07-24 19:21:33.426378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.713 qpair failed and we were unable to recover it.
00:24:27.713 [2024-07-24 19:21:33.426498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.713 [2024-07-24 19:21:33.426547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.713 qpair failed and we were unable to recover it.
00:24:27.713 [2024-07-24 19:21:33.426664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.713 [2024-07-24 19:21:33.426713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.713 qpair failed and we were unable to recover it.
00:24:27.713 [2024-07-24 19:21:33.426892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.713 [2024-07-24 19:21:33.426942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.713 qpair failed and we were unable to recover it.
00:24:27.713 [2024-07-24 19:21:33.427105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.713 [2024-07-24 19:21:33.427155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.713 qpair failed and we were unable to recover it.
00:24:27.713 [2024-07-24 19:21:33.427337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.713 [2024-07-24 19:21:33.427382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.713 qpair failed and we were unable to recover it.
00:24:27.713 [2024-07-24 19:21:33.427488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.713 [2024-07-24 19:21:33.427515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.713 qpair failed and we were unable to recover it.
00:24:27.713 [2024-07-24 19:21:33.427713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.713 [2024-07-24 19:21:33.427761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.713 qpair failed and we were unable to recover it.
00:24:27.713 [2024-07-24 19:21:33.427907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.713 [2024-07-24 19:21:33.427934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.713 qpair failed and we were unable to recover it.
00:24:27.713 [2024-07-24 19:21:33.428086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.713 [2024-07-24 19:21:33.428141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.713 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.428266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.428315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.428509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.428556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.428744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.428800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.428929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.429011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.429136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.429182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.429277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.429302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.429420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.429467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.429620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.429646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.429744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.429769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.429911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.429953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.430082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.430129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.430245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.430289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.430433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.430494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.430646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.430713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.430809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.430834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.430946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.430993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.431113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.431180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.431396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.431424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.431552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.431597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.431698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.431725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.431884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.431945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.432067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.432115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.432306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.432359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.432505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.432556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.432680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.432728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.432912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.432965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.433134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.433195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.433392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.433443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.433609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.433695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.433880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.433930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.434076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.434105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.434246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.434293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.434445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.434515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.434647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.434696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.434809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.434855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.434972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.714 [2024-07-24 19:21:33.435017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.714 qpair failed and we were unable to recover it.
00:24:27.714 [2024-07-24 19:21:33.435198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.435252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.435408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.435458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.435665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.435717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.435814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.435840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.436022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.436077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.436299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.436356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.436496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.436537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.436668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.436708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.436835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.436918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.437035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.437082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.437222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.437291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.437466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.437529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.437675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.437710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.437908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.437957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.438158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.438184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.438376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.438430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.438632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.438684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.438871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.438898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.438999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.439025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.439154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.439193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.439338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.439389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.439494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.439527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.439644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.439694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.439820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.439868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.440067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.440125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.440319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.440366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.440568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.440628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.440734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.440761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.440857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.440882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.441005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.441046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.441172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.441217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.441350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.441399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.441549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.441597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.441732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.441777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.441875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.441901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.442027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.715 [2024-07-24 19:21:33.442071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.715 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-24 19:21:33.442272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.442329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.442453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.442526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.442780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.442830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.443000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.443063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.443212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.443274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.443444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.443501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.443705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.443753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.443909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.443970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.444137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.444162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.444261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.444291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.444449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.444526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.444652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.444698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.444820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.444876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.444988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.445026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.445245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.445299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.445447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.445476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.445615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.445666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.445781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.445807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.445909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.445934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.446060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.446104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.446251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.446281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.446510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.446569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.446715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.446789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.446988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.447047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.447289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.447346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.447541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.447585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.447818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.447876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.448062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.448122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.448367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.448425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.448604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.448655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.448788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.448840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.448968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.449017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.449145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.449227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.449346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.449375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.449497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.449544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.449715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.449780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.716 [2024-07-24 19:21:33.449905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.716 [2024-07-24 19:21:33.449953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.716 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.450153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.450213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.450355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.450401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.450610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.450661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.450783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.450833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.450979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.451006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.451104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.451130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.451232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.451259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.451356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.451382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.451644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.451697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.451833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.451885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.452076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.452130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.452234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.452261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.452381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.452432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.452549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.452596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.452722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.452766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.453018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.453073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.453254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.453306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.453438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.453477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.453597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.453647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.453898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.453950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.454101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.454187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.454286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.454320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.454422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.454449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.454588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.454669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.454796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.454841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.454941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.454968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.455080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.455129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.455273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.455303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.455434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.455532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.455691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.455745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.455933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.455982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.456163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.456216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.456347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.456391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.456533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.456586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.456766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.456817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.456963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.717 [2024-07-24 19:21:33.457004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.717 qpair failed and we were unable to recover it.
00:24:27.717 [2024-07-24 19:21:33.457134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.718 [2024-07-24 19:21:33.457188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.718 qpair failed and we were unable to recover it.
00:24:27.718 [2024-07-24 19:21:33.457314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.718 [2024-07-24 19:21:33.457383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.718 qpair failed and we were unable to recover it.
00:24:27.718 [2024-07-24 19:21:33.457571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.718 [2024-07-24 19:21:33.457638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.718 qpair failed and we were unable to recover it.
00:24:27.718 [2024-07-24 19:21:33.457837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.718 [2024-07-24 19:21:33.457885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.718 qpair failed and we were unable to recover it.
00:24:27.718 [2024-07-24 19:21:33.458069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.718 [2024-07-24 19:21:33.458119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.718 qpair failed and we were unable to recover it.
00:24:27.718 [2024-07-24 19:21:33.458219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.718 [2024-07-24 19:21:33.458246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.718 qpair failed and we were unable to recover it.
00:24:27.718 [2024-07-24 19:21:33.458341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.718 [2024-07-24 19:21:33.458366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.718 qpair failed and we were unable to recover it.
00:24:27.718 [2024-07-24 19:21:33.458517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.718 [2024-07-24 19:21:33.458584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.718 qpair failed and we were unable to recover it.
00:24:27.718 [2024-07-24 19:21:33.458717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.718 [2024-07-24 19:21:33.458762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.718 qpair failed and we were unable to recover it.
00:24:27.718 [2024-07-24 19:21:33.458904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.718 [2024-07-24 19:21:33.458931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.718 qpair failed and we were unable to recover it.
00:24:27.718 [2024-07-24 19:21:33.459112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.718 [2024-07-24 19:21:33.459164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.718 qpair failed and we were unable to recover it.
00:24:27.718 [2024-07-24 19:21:33.459278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.718 [2024-07-24 19:21:33.459325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.718 qpair failed and we were unable to recover it.
00:24:27.718 [2024-07-24 19:21:33.459449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.718 [2024-07-24 19:21:33.459543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.718 qpair failed and we were unable to recover it.
00:24:27.718 [2024-07-24 19:21:33.459676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.718 [2024-07-24 19:21:33.459719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.718 qpair failed and we were unable to recover it.
00:24:27.718 [2024-07-24 19:21:33.459854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.459902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.460070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.460121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.460305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.460359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.460475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.460536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.460634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.460661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.460783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.460829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.461014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.461066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.461185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.461253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.461419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.461493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.461662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.461724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 
00:24:27.718 [2024-07-24 19:21:33.461853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.461938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.462062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.462111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.462295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.462345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.462528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.462555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.462672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.462729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.462871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.462898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.463028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.463107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.718 [2024-07-24 19:21:33.463235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.718 [2024-07-24 19:21:33.463318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.718 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.463475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.463546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.463697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.463763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 
00:24:27.719 [2024-07-24 19:21:33.463975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.464028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.464151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.464196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.464328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.464381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.464545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.464599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.464718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.464763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.464886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.464938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.465057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.465111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.465226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.465275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.465370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.465397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.465493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.465519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 
00:24:27.719 [2024-07-24 19:21:33.465623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.465649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.465858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.465912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.466078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.466129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.466265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.466311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.466426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.466452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.466580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.466628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.466757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.466803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.466947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.466974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.467103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.467190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.467319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.467367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 
00:24:27.719 [2024-07-24 19:21:33.467463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.467493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.467663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.467725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.467848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.467893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.468026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.468079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.468214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.468267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.468414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.468462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.468614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.468696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.468790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.468815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.468945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.468992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.469101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.469149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 
00:24:27.719 [2024-07-24 19:21:33.469295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.469345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.469490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.469533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.469728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.469783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.719 [2024-07-24 19:21:33.469903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.719 [2024-07-24 19:21:33.469950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.719 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.470084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.470150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.470290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.470345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.470503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.470563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.470679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.470726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.470908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.470962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.471090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.471149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 
00:24:27.720 [2024-07-24 19:21:33.471272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.471320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.471504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.471548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.471731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.471785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.471910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.471957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.472081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.472148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.472261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.472290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.472425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.472515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.472712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.472738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.472851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.472897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.473037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.473117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 
00:24:27.720 [2024-07-24 19:21:33.473259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.473311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.473430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.473521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.473653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.473702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.473825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.473868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.474016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.474083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.474270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.474320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.474467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.474540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.474667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.474723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.474851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.474898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.474997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.475078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 
00:24:27.720 [2024-07-24 19:21:33.475208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.475256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.475394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.475462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.475600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.475648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.475775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.475857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.476030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.476091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.476186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.476211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.476330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.476378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.476499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.476545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.476638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.476663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 00:24:27.720 [2024-07-24 19:21:33.476832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.720 [2024-07-24 19:21:33.476858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.720 qpair failed and we were unable to recover it. 
00:24:27.720 [2024-07-24 19:21:33.476973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.477020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.477118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.477145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.477289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.477345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.477502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.477553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.477742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.477794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.477993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.478040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.478160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.478209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.478361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.478425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.478584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.478633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.478728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.478754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 
00:24:27.721 [2024-07-24 19:21:33.478887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.478926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.479053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.479135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.479235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.479260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.479377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.479422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.479571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.479631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.479761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.479808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.479973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.480033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.480196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.480260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.480369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.480395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.480495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.480521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 
00:24:27.721 [2024-07-24 19:21:33.480633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.480679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.480797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.480843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.480969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.481050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.481189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.481272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.481368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.481393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.481551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.481603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.481739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.481820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.481991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.482055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.482188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.482242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.482378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.482462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 
00:24:27.721 [2024-07-24 19:21:33.482645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.482706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.482907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.482957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.483056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.483083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.483202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.483248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.483372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.483424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.483553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-07-24 19:21:33.483600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.721 qpair failed and we were unable to recover it. 00:24:27.721 [2024-07-24 19:21:33.483713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.483761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.483873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.483931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.484181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.484236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.484495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.484538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 
00:24:27.722 [2024-07-24 19:21:33.484731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.484780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.484908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.484946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.485085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.485168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.485306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.485348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.485467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.485531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.485683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.485740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.485920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.485970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.486094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.486160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.486296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.486346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.486518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.486569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 
00:24:27.722 [2024-07-24 19:21:33.486708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.486752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.486856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.486883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.487003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.487056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.487148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.487174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.487311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.487365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.487510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.487537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.487666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.487747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.487842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.487867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.487987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.488054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.488287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.488339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 
00:24:27.722 [2024-07-24 19:21:33.488449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.488474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.488628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.488673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.488807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.488858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.488980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.489025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.489188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.489239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.489996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.490026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.490160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.490206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.490298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.490323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.490473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.490547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 00:24:27.722 [2024-07-24 19:21:33.490674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.722 [2024-07-24 19:21:33.490721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.722 qpair failed and we were unable to recover it. 
00:24:27.728 [2024-07-24 19:21:33.532525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.728 [2024-07-24 19:21:33.532574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.728 qpair failed and we were unable to recover it. 00:24:27.728 [2024-07-24 19:21:33.532671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.728 [2024-07-24 19:21:33.532697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.728 qpair failed and we were unable to recover it. 00:24:27.728 [2024-07-24 19:21:33.532901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.728 [2024-07-24 19:21:33.532949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.728 qpair failed and we were unable to recover it. 00:24:27.728 [2024-07-24 19:21:33.533177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.728 [2024-07-24 19:21:33.533226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.728 qpair failed and we were unable to recover it. 00:24:27.728 [2024-07-24 19:21:33.533413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.728 [2024-07-24 19:21:33.533464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.728 qpair failed and we were unable to recover it. 00:24:27.728 [2024-07-24 19:21:33.533717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.728 [2024-07-24 19:21:33.533745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.728 qpair failed and we were unable to recover it. 00:24:27.728 [2024-07-24 19:21:33.533945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.728 [2024-07-24 19:21:33.533971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.728 qpair failed and we were unable to recover it. 00:24:27.728 [2024-07-24 19:21:33.534172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.728 [2024-07-24 19:21:33.534223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.728 qpair failed and we were unable to recover it. 00:24:27.728 [2024-07-24 19:21:33.534354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.728 [2024-07-24 19:21:33.534433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.728 qpair failed and we were unable to recover it. 00:24:27.728 [2024-07-24 19:21:33.534537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.728 [2024-07-24 19:21:33.534564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.728 qpair failed and we were unable to recover it. 
00:24:27.728 [2024-07-24 19:21:33.534660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.728 [2024-07-24 19:21:33.534686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.728 qpair failed and we were unable to recover it. 00:24:27.728 [2024-07-24 19:21:33.534882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.728 [2024-07-24 19:21:33.534930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.535061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.535141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.535368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.535421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.535608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.535636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.535820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.535900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.536069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.536134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.536398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.536458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.536655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.536726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.536984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.537043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 
00:24:27.729 [2024-07-24 19:21:33.537208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.537268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.537532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.537559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.537832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.537890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.538151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.538210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.538417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.538478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.538731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.538760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.539022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.539073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.539332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.539382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.539635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.539688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.539856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.539905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 
00:24:27.729 [2024-07-24 19:21:33.540049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.540074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.540165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.540190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.540317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.540398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.540536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.540578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.540811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.540860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.541049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.541097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.541289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.541342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.541526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.541581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.541679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.541705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.541826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.541874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 
00:24:27.729 [2024-07-24 19:21:33.542055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.542103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.542279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.542332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.542471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.542530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.542752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.542780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.542983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.543009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.543286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.543343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.543520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.543583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.729 [2024-07-24 19:21:33.543763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.729 [2024-07-24 19:21:33.543827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.729 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.544009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.544059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.544279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.544326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 
00:24:27.730 [2024-07-24 19:21:33.544430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.544459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.544602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.544660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.544848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.544901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.545093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.545141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.545339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.545389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.545548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.545581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.545765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.545826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.546088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.546138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.546383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.546433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.546660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.546710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 
00:24:27.730 [2024-07-24 19:21:33.546896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.546945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.547097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.547156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.547309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.547335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.547493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.547522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.547767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.547818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.548017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.548069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.548322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.548369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.548525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.548553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.548770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.548829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.549086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.549141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 
00:24:27.730 [2024-07-24 19:21:33.549384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.549435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.549640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.549690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.549911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.549961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.550115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.550172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.550335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.550384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.550545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.550572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.550749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.550775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.550982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.551038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.551189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.551249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.551427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.551476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 
00:24:27.730 [2024-07-24 19:21:33.551671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.551719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.551875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.551939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.552181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.552233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.552432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.730 [2024-07-24 19:21:33.552492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.730 qpair failed and we were unable to recover it. 00:24:27.730 [2024-07-24 19:21:33.552628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.552681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.552814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.552895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.553070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.553119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.553275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.553335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.553564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.553592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.553799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.553858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 
00:24:27.731 [2024-07-24 19:21:33.554113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.554157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.554395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.554453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.554726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.554785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.555056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.555114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.555297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.555364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.555559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.555614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.555755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.555797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.555975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.556026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.556272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.556325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.556420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.556446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 
00:24:27.731 [2024-07-24 19:21:33.556551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.556577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.556717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.556773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.557014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.557067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.557161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.557239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.557387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.557414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.557678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.557729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.557881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.557908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.558085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.558130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.558323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.558373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.558531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.558565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 
00:24:27.731 [2024-07-24 19:21:33.558690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.558745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.558878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.558928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.559118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.559144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.559336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.559386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.559537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.559584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.559724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.559766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.559947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.559997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.560140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.560194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.560325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.560407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 00:24:27.731 [2024-07-24 19:21:33.560632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.731 [2024-07-24 19:21:33.560696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.731 qpair failed and we were unable to recover it. 
00:24:27.732 [2024-07-24 19:21:33.560998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.732 [2024-07-24 19:21:33.561057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.732 qpair failed and we were unable to recover it. 00:24:27.732 [2024-07-24 19:21:33.561232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.732 [2024-07-24 19:21:33.561283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.732 qpair failed and we were unable to recover it. 00:24:27.732 [2024-07-24 19:21:33.561407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.732 [2024-07-24 19:21:33.561476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.732 qpair failed and we were unable to recover it. 00:24:27.732 [2024-07-24 19:21:33.561660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.732 [2024-07-24 19:21:33.561712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.732 qpair failed and we were unable to recover it. 00:24:27.732 [2024-07-24 19:21:33.561838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.732 [2024-07-24 19:21:33.561886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.732 qpair failed and we were unable to recover it. 00:24:27.732 [2024-07-24 19:21:33.561987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.732 [2024-07-24 19:21:33.562014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.732 qpair failed and we were unable to recover it. 00:24:27.732 [2024-07-24 19:21:33.562209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.732 [2024-07-24 19:21:33.562261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.732 qpair failed and we were unable to recover it. 00:24:27.732 [2024-07-24 19:21:33.562365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.732 [2024-07-24 19:21:33.562393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.732 qpair failed and we were unable to recover it. 00:24:27.732 [2024-07-24 19:21:33.562506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.732 [2024-07-24 19:21:33.562533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.732 qpair failed and we were unable to recover it. 00:24:27.732 [2024-07-24 19:21:33.562775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.732 [2024-07-24 19:21:33.562801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.732 qpair failed and we were unable to recover it. 
00:24:27.732 [2024-07-24 19:21:33.563007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.732 [2024-07-24 19:21:33.563055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.732 qpair failed and we were unable to recover it.
[... identical posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error" / "qpair failed and we were unable to recover it." triplet repeats continuously from 19:21:33.563 through 19:21:33.610, cycling over tqpair handles 0x7f05fc000b90, 0x7f0604000b90, 0x7f05f4000b90, and 0xd42120, always against addr=10.0.0.2, port=4420 ...]
00:24:27.738 [2024-07-24 19:21:33.611015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.611085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.611263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.611327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.611625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.611652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.611982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.612039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.612263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.612321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.612569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.612630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.612972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.613030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.613223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.613250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.613497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.613549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.613734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.613781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 
00:24:27.738 [2024-07-24 19:21:33.613976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.614025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.614148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.614207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.614304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.614330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.614506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.614533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.614795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.614859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.615040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.615105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.615290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.615355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.615538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.615598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.615896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.615954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.616107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.616158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 
00:24:27.738 [2024-07-24 19:21:33.616349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.616402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.738 [2024-07-24 19:21:33.616513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.738 [2024-07-24 19:21:33.616541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.738 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.616649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.616674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.616852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.616930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.617124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.617179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.617409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.617458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.617627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.617655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.617917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.617974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.618154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.618222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.618410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.618505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 
00:24:27.739 [2024-07-24 19:21:33.618754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.618812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.619104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.619130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.619423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.619517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.619685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.619744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.620040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.620098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.620287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.620343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.620527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.620596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.620895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.620953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.621175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.621234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.621572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.621654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 
00:24:27.739 [2024-07-24 19:21:33.621838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.621906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.622172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.622229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.622406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.622464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.622683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.622740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.622921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.622989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.623278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.623335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.623499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.623558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.623830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.623888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.624064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.624128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.624429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.624509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 
00:24:27.739 [2024-07-24 19:21:33.624762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.624819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.625024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.625070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.625206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.625287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.625430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.625487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.625690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.625738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.625916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.625942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.626114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.626164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.626433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.739 [2024-07-24 19:21:33.626491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.739 qpair failed and we were unable to recover it. 00:24:27.739 [2024-07-24 19:21:33.626730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.626779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.626908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.626967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 
00:24:27.740 [2024-07-24 19:21:33.627121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.627174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.627450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.627533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.627796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.627855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.628209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.628269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.628547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.628618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.628886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.628944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.629131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.629198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.629510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.629570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.629747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.629794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.629930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.629984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 
00:24:27.740 [2024-07-24 19:21:33.630180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.630251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.630546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.630572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.630833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.630889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.631239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.631296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.631535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.631597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.631860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.631910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.632068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.632128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.632327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.632375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.632578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.632630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.632783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.632810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 
00:24:27.740 [2024-07-24 19:21:33.633005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.633054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.633274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.633335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.633495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.633549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.633737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.633808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.634093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.634152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.634334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.634403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.634650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.634676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.634913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.634962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.635126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.635181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.635317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.635365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 
00:24:27.740 [2024-07-24 19:21:33.635597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.635647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.635749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.635781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.635965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.636015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.636192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.740 [2024-07-24 19:21:33.636217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.740 qpair failed and we were unable to recover it. 00:24:27.740 [2024-07-24 19:21:33.636367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.636430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.636604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.636654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.636747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.636772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.636939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.636989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.637168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.637220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.637453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.637538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 
00:24:27.741 [2024-07-24 19:21:33.637683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.637709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.637992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.638050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.638232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.638301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.638609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.638668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.638966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.638992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.639235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.639295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.639606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.639666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.639974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.640032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.640228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.640298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.640552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.640579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 
00:24:27.741 [2024-07-24 19:21:33.640912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.640970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.641165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.641236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.641564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.641614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.641822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.641895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.642094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.642168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.642426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.642496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.642710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.642736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.642915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.642986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.643189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.643245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.643413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.643477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 
00:24:27.741 [2024-07-24 19:21:33.643658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.643712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.643902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.643961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.644093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.644145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.644290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.644340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.644525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.644574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.644748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.644797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.644915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.644973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.645106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.645161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.645379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.645432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 00:24:27.741 [2024-07-24 19:21:33.645606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.645660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.741 qpair failed and we were unable to recover it. 
00:24:27.741 [2024-07-24 19:21:33.645787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.741 [2024-07-24 19:21:33.645840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.645995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.646064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.646207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.646262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.646416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.646442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.646633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.646683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.646931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.646979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.647122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.647170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.647367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.647415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.647596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.647650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.647840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.647904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 
00:24:27.742 [2024-07-24 19:21:33.648100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.648172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.648363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.648434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.648630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.648702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.648891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.648960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.649276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.649345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.649545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.649572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.649758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.649816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.650012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.650083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.650385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.650441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 00:24:27.742 [2024-07-24 19:21:33.650639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.742 [2024-07-24 19:21:33.650712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.742 qpair failed and we were unable to recover it. 
00:24:27.742 [2024-07-24 19:21:33.650899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.742 [2024-07-24 19:21:33.650954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.742 qpair failed and we were unable to recover it.
00:24:27.742 [2024-07-24 19:21:33.651079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.742 [2024-07-24 19:21:33.651134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.742 qpair failed and we were unable to recover it.
00:24:27.742 [2024-07-24 19:21:33.651226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.742 [2024-07-24 19:21:33.651251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.742 qpair failed and we were unable to recover it.
00:24:27.742 [2024-07-24 19:21:33.651439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.742 [2024-07-24 19:21:33.651494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.742 qpair failed and we were unable to recover it.
00:24:27.742 [2024-07-24 19:21:33.651619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.742 [2024-07-24 19:21:33.651674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.742 qpair failed and we were unable to recover it.
00:24:27.742 [2024-07-24 19:21:33.651809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.742 [2024-07-24 19:21:33.651859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.742 qpair failed and we were unable to recover it.
00:24:27.742 [2024-07-24 19:21:33.652006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.742 [2024-07-24 19:21:33.652033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.742 qpair failed and we were unable to recover it.
00:24:27.742 [2024-07-24 19:21:33.652279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.742 [2024-07-24 19:21:33.652331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.742 qpair failed and we were unable to recover it.
00:24:27.742 [2024-07-24 19:21:33.652494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.742 [2024-07-24 19:21:33.652554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.742 qpair failed and we were unable to recover it.
00:24:27.742 [2024-07-24 19:21:33.652650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.742 [2024-07-24 19:21:33.652675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.742 qpair failed and we were unable to recover it.
00:24:27.742 [2024-07-24 19:21:33.652820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.742 [2024-07-24 19:21:33.652884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.742 qpair failed and we were unable to recover it.
00:24:27.742 [2024-07-24 19:21:33.653015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.742 [2024-07-24 19:21:33.653095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.742 qpair failed and we were unable to recover it.
00:24:27.742 [2024-07-24 19:21:33.653313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.742 [2024-07-24 19:21:33.653363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.653508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.653556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.653710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.653737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.653928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.653992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.654166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.654211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.654376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.654443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.654672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.654730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.654926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.654975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.655094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.655146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.655307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.655365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.655543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.655591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.655745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.655771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.655909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.655959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.656053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.656079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.656194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.656250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.656513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.656567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.656722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.656774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.656923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.656986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.657106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.657157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.657317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.657378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.657554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.657608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.657735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.657782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.657910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.657963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.658081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.658134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.658304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.658359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.658546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.658577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.658730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.658758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.658913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.658978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.659128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.659193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.659341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.659391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.659564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.659611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.659777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.659829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.660007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.660073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.660278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.660363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.660516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.660543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.660665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.660723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.743 [2024-07-24 19:21:33.660817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.743 [2024-07-24 19:21:33.660842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.743 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.660946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.660973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.661166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.661218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.661347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.661428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.661528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.661554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.661648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.661673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.661877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.661928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.662118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.662170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.662325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.662390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.662503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.662531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.662668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.662710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.662809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.662835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.662971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.663044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.663234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.663297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.663516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.663566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.663664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.663690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.663835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.663901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.664038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.664091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.664262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.664326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.664511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.664580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.664765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.664833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.665030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.665096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.665276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.665342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.665591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.665639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.665836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.665895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.666165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.666225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.666423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.666509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.666679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.666745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.666956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.667012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.667146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.667189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.667290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.667319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.667423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.667450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.667613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.667677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.667885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.667935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.668127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.668181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.668312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.668364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.668554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.668607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.668736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.744 [2024-07-24 19:21:33.668786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.744 qpair failed and we were unable to recover it.
00:24:27.744 [2024-07-24 19:21:33.668911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.668960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.669122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.669178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.669299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.669349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.669520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.669579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.669713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.669760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.669916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.669945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.670165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.670222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.670421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.670474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.670583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.670610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.670766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.670828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.670966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.671019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.671185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.671237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.671440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.671504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.671625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.671680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.671809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.671853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.671954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.671980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.672076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.672106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.672263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.672328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.672428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.672455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.672612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.672640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.672817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.672883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.673070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.673140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.673307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.673364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.673525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.673553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.673700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.673766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.673909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.673959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.674059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.674086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.674209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.674257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.674401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.674454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.674610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.674653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.674918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.674976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.675159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.675229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.675401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.675459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.675630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.675688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.675885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.675943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.676140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.676208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.676363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.676417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.676636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.676707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.676871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.745 [2024-07-24 19:21:33.676940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.745 qpair failed and we were unable to recover it.
00:24:27.745 [2024-07-24 19:21:33.677195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.677265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.677394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.677461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.677657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.677717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.677883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.677946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.678120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.678148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.678253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.678281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.678446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.678502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.678653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.678719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.678833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.678882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.679033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.679099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.679232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.679282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.679411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.679464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.679593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.679646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.679792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.679866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.680044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.680108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.680384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.680443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.680646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.680716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.680901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.680966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.681232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.681291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.681476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.681508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.681791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.681852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.682043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.682113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.682305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.682374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.682571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.682598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.682778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.682831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.682965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.683013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.683154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.683206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.683378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.683430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.683612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.683666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.683830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.683880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.684009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.684059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.684181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.684240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.684392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.684421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.684558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.684601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.684722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.684780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.684950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.684998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.685137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.685165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.685389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.685448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.685643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.685710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.746 qpair failed and we were unable to recover it.
00:24:27.746 [2024-07-24 19:21:33.685999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.746 [2024-07-24 19:21:33.686057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.747 qpair failed and we were unable to recover it.
00:24:27.747 [2024-07-24 19:21:33.686230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.747 [2024-07-24 19:21:33.686292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.747 qpair failed and we were unable to recover it.
00:24:27.747 [2024-07-24 19:21:33.686519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.747 [2024-07-24 19:21:33.686585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:27.747 qpair failed and we were unable to recover it.
00:24:27.747 [2024-07-24 19:21:33.686773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.747 [2024-07-24 19:21:33.686801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.747 qpair failed and we were unable to recover it.
00:24:27.747 [2024-07-24 19:21:33.686918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.747 [2024-07-24 19:21:33.686976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.747 qpair failed and we were unable to recover it.
00:24:27.747 [2024-07-24 19:21:33.687115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.747 [2024-07-24 19:21:33.687168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:27.747 qpair failed and we were unable to recover it.
00:24:27.747 [2024-07-24 19:21:33.687324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.687390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.687497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.687524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.687655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.687682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.687829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.687895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.687990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.688015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.688205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.688256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.688456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.688512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.688712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.688764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.688895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.688945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.689092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.689155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 
00:24:27.747 [2024-07-24 19:21:33.689331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.689386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.689564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.689615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.689716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.689742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.689929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.689983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.690083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.690110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.690298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.690326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.690451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.690505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.690658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.690684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.690785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.690815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.690951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.690994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 
00:24:27.747 [2024-07-24 19:21:33.691094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.691121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.691216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.691242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.691335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.691360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.691475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.691545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.691693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.691759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.692018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.692077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.692311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.692341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.692477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.692524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.692689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.692754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 00:24:27.747 [2024-07-24 19:21:33.693021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.747 [2024-07-24 19:21:33.693079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.747 qpair failed and we were unable to recover it. 
00:24:27.747 [2024-07-24 19:21:33.693374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.748 [2024-07-24 19:21:33.693433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.748 qpair failed and we were unable to recover it. 00:24:27.748 [2024-07-24 19:21:33.693619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.748 [2024-07-24 19:21:33.693685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.748 qpair failed and we were unable to recover it. 00:24:27.748 [2024-07-24 19:21:33.693903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.748 [2024-07-24 19:21:33.693963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.748 qpair failed and we were unable to recover it. 00:24:27.748 [2024-07-24 19:21:33.694138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.748 [2024-07-24 19:21:33.694196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.748 qpair failed and we were unable to recover it. 00:24:27.748 [2024-07-24 19:21:33.694376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.748 [2024-07-24 19:21:33.694423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.748 qpair failed and we were unable to recover it. 00:24:27.748 [2024-07-24 19:21:33.694589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.748 [2024-07-24 19:21:33.694651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.748 qpair failed and we were unable to recover it. 00:24:27.748 [2024-07-24 19:21:33.694837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.748 [2024-07-24 19:21:33.694888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.748 qpair failed and we were unable to recover it. 00:24:27.748 [2024-07-24 19:21:33.695062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.748 [2024-07-24 19:21:33.695128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.748 qpair failed and we were unable to recover it. 00:24:27.748 [2024-07-24 19:21:33.695291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.748 [2024-07-24 19:21:33.695348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.748 qpair failed and we were unable to recover it. 00:24:27.748 [2024-07-24 19:21:33.695527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.748 [2024-07-24 19:21:33.695579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.748 qpair failed and we were unable to recover it. 
00:24:27.748 [2024-07-24 19:21:33.695832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.748 [2024-07-24 19:21:33.695891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:27.748 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.696067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.696125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.696305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.696365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.696508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.696563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.696663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.696689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.696780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.696806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.696932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.696980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.697109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.697196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.697322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.697369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.697499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.697544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 
00:24:28.029 [2024-07-24 19:21:33.697734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.697782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.697996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.698059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.698270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.698328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.698500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.698574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.698749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.698811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.698989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.699045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.699211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.699260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.699365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.699394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.699553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.699615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.699742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.699787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 
00:24:28.029 [2024-07-24 19:21:33.699916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.699963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.700111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.700138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.700318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.700371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.700530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.700558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.700745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.700796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.700915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.700967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.701065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.701100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.701267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.701319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-24 19:21:33.701450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-24 19:21:33.701508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.701640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.701724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 
00:24:28.030 [2024-07-24 19:21:33.701821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.701847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.701972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.702018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.702227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.702274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.702420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.702469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.702634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.702697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.702836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.702884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.703054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.703102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.703282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.703339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.703452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.703513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.703630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.703680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 
00:24:28.030 [2024-07-24 19:21:33.703811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.703861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.704046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.704097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.704302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.704353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.704530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.704581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.704678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.704705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.704825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.704883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.705008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.705054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.705183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.705267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.705369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.705396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.705507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.705536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 
00:24:28.030 [2024-07-24 19:21:33.705668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.705718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.705864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.705891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.706091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.706145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.706276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.706330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.706474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.706527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.706739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.706799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.707045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.707103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.707364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.707425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.707667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.707695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.707843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.707907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 
00:24:28.030 [2024-07-24 19:21:33.708085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.030 [2024-07-24 19:21:33.708111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.030 qpair failed and we were unable to recover it. 00:24:28.030 [2024-07-24 19:21:33.708266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.708327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.708457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.708519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.708654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.708703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.708832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.708874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.709023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.709090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.709288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.709352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.709551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.709608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.709712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.709739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.709868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.709916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 
00:24:28.031 [2024-07-24 19:21:33.710062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.710089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.710205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.710254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.710386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.710469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.710627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.710692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.710874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.710932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.711112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.711181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.711356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.711382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.711537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.711605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.711779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.711822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.712070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.712128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 
00:24:28.031 [2024-07-24 19:21:33.712294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.712351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.712534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.712593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.712861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.712922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.713136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.713196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.713367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.713427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.713611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.713679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.713861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.713918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.714102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.714170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.714458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.714538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.714690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.714752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 
00:24:28.031 [2024-07-24 19:21:33.714943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.715001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.715151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.715196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.715455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.715537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.031 qpair failed and we were unable to recover it. 00:24:28.031 [2024-07-24 19:21:33.715675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.031 [2024-07-24 19:21:33.715746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.715920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.715979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.716157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.716183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.716415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.716475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.716672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.716723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.716918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.716969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.717129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.717188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 
00:24:28.032 [2024-07-24 19:21:33.717315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.717363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.717511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.717576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.717714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.717762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.717864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.717891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.718024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.718067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.718197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.718283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.718447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.718533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.718668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.718722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.718910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.718958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.719047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.719073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 
00:24:28.032 [2024-07-24 19:21:33.719170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.719196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.719373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.719423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.719580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.719631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.719747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.719797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.719930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.719976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.720117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.720160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.720311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.720339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.720496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.720556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.720740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.720797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-24 19:21:33.720956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-24 19:21:33.720995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats roughly 200 more times from 19:21:33.721 through 19:21:33.760, all targeting addr=10.0.0.2, port=4420 and cycling over tqpair handles 0xd42120, 0x7f0604000b90, 0x7f05f4000b90, and 0x7f05fc000b90 ...]
00:24:28.039 [2024-07-24 19:21:33.760862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.039 [2024-07-24 19:21:33.760931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.039 qpair failed and we were unable to recover it.
00:24:28.039 [2024-07-24 19:21:33.761066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.761111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.761219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.761246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.761393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.761447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.761599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.761627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.761774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.761825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.761950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.761989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.762117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.762162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.762284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.762332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.762463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.762522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.762719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.762746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 
00:24:28.039 [2024-07-24 19:21:33.762907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.762970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.763117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.763185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.763357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.763414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.763596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.763661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.763810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.763886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.764127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.764185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.764345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.764383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.764573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.764641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.764800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.764838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.765021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.765046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 
00:24:28.039 [2024-07-24 19:21:33.765222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.765283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.765541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.765602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.765768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.765826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.039 qpair failed and we were unable to recover it. 00:24:28.039 [2024-07-24 19:21:33.766024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.039 [2024-07-24 19:21:33.766081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.766193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.766221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.766385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.766434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.766582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.766648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.766765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.766810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.766943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.767025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.767206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.767262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 
00:24:28.040 [2024-07-24 19:21:33.767409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.767475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.767604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.767657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.767778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.767843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.767963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.767990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.768167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.768192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.768326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.768354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.768584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.768636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.768766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.768806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.768986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.769041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.769164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.769211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 
00:24:28.040 [2024-07-24 19:21:33.769350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.769408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.769602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.769656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.769918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.769968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.770100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.770144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.770240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.770265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.770464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.770517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.770666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.770728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.770828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.770854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.771035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.771085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.771217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.771300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 
00:24:28.040 [2024-07-24 19:21:33.771417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.771463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.771671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.771725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.771850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.771897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.772042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.772069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.772201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.772286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.772389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.772416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.772613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.772665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.772809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.772876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.772975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.773001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.040 qpair failed and we were unable to recover it. 00:24:28.040 [2024-07-24 19:21:33.773162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.040 [2024-07-24 19:21:33.773211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 
00:24:28.041 [2024-07-24 19:21:33.773370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.773434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.773629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.773698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.773861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.773887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.774154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.774212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.774459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.774533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.774691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.774747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.775075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.775132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.775280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.775338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.775512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.775539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.775755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.775819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 
00:24:28.041 [2024-07-24 19:21:33.775987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.776044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.776288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.776345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.776620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.776688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.776816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.776871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.777066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.777115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.777297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.777351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.777486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.777534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.777632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.777659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.777840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.777888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.778071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.778119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 
00:24:28.041 [2024-07-24 19:21:33.778306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.778357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.778537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.778568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.778756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.778809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.778925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.778982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.779176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.779223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.779355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.779407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.779512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.779538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.779685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.779713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.779843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.779869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.780022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.780082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 
00:24:28.041 [2024-07-24 19:21:33.780255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.780282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.780515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.780574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.780834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.780894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.041 qpair failed and we were unable to recover it. 00:24:28.041 [2024-07-24 19:21:33.781092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.041 [2024-07-24 19:21:33.781152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.781391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.781416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.781603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.781665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.781828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.781881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.782122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.782175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.782297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.782365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.782540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.782604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 
00:24:28.042 [2024-07-24 19:21:33.782708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.782734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.782835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.782867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.782996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.783078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.783178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.783204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.783372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.783420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.783519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.783546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.783645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.783670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.783770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.783801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.783956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.783986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.784090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.784118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 
00:24:28.042 [2024-07-24 19:21:33.784212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.784238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.784337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.784365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.784531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.784557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.784677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.784723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.784820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.784847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.784985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.785039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.785216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.785241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.785364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.785410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.785507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.785533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.785666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.785718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 
00:24:28.042 [2024-07-24 19:21:33.785834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.785882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.786073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.786128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.786269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.786316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.786501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.786528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.786708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.786758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.786858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.786884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.787030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.787057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.787199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.787248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.787378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.042 [2024-07-24 19:21:33.787459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.042 qpair failed and we were unable to recover it. 00:24:28.042 [2024-07-24 19:21:33.787610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.043 [2024-07-24 19:21:33.787670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.043 qpair failed and we were unable to recover it. 
00:24:28.043 [2024-07-24 19:21:33.787871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.787926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.788039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.788098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.788224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.788304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.788434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.788524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.788666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.788718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.788849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.788891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.789023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.789102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.789230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.789279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.789394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.789442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.789644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.789692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.789815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.789868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.789991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.790058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.790269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.790320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.790455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.790511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.790702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.790760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.790902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.790949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.791125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.791151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.791326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.791375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.791505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.791546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.791680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.791728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.791849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.791896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.792082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.792132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.792249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.792297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.792451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.792477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.792674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.792723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.792899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.792925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.793127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.793193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.793381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.793433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.793574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.793618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.793746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.793794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.794045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.794096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.794214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.794267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.043 qpair failed and we were unable to recover it.
00:24:28.043 [2024-07-24 19:21:33.794407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.043 [2024-07-24 19:21:33.794458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.794563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.794591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.794802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.794843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.794989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.795016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.795163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.795227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.795323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.795348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.795465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.795524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.795679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.795740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.795840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.795867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.795987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.796035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.796155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.796200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.796294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.796320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.796444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.796534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.796788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.796837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.796973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.797055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.797236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.797286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.797408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.797455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.797592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.797674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.797795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.797837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.797953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.797998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.798127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.798167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.798306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.798333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.798500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.798553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.798749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.798798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.798897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.798925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.799058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.799138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.799259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.799304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.799531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.799599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.799833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.799892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.800095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.800154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.800326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.044 [2024-07-24 19:21:33.800385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.044 qpair failed and we were unable to recover it.
00:24:28.044 [2024-07-24 19:21:33.800586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.800634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.800754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.800802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.800971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.801023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.801207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.801257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.801506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.801553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.801701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.801754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.801929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.801981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.802126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.802191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.802310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.802371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.802537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.802564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.802755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.802806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.803003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.803055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.803148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.803173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.803359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.803406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.803507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.803535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.803732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.803779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.803965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.804015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.804266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.804322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.804540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.804569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.804754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.804814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.805009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.805059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.805256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.805307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.805526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.805553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.805739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.805793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.805923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.806004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.806103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.806129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.806254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.806335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.806465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.806556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.806741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.806792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.806980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.807029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.807158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.807237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.807336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.807363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.807524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.807581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.807709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.807758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.807929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.807956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.808097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.045 [2024-07-24 19:21:33.808149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.045 qpair failed and we were unable to recover it.
00:24:28.045 [2024-07-24 19:21:33.808328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.808379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.808478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.808517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.808648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.808733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.808912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.808972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.809162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.809210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.809426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.809472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.809666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.809726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.809849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.809906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.810055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.810082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.810209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.810275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.810392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.810418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.810561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.810615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.810710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.810737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.810837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.810862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.810993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.811075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.811204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.811289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.811463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.811525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.811653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.811701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.811831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.811874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.812046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.812096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.812279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.812327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.812512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.812540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.812705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.812780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.812967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.813015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.813115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.813141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.813299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.813358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.813519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.813546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.813696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.813754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.813888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.813941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.814132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.814179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.814302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.814369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.814562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.814590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.814743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.814807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.814934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.815015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.815151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.815230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.815331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.046 [2024-07-24 19:21:33.815356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.046 qpair failed and we were unable to recover it.
00:24:28.046 [2024-07-24 19:21:33.815555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.815613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.815761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.815829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.816006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.816070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.816334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.816392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.816608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.816635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.816815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.816841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.817022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.817079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.817264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.817321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.817502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.817553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.817688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.817735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.817920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.817970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.818168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.818220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.818331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.818413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.818551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.818603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.818727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.818794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.818991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.819038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.819243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.819293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.819427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.819514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.819754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.819808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.819993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.820043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.820166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.820213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.820356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.820383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.820517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.820567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.820697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.820780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.820905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.820957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.821109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.821174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.821352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.821404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.821577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.821630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.821751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.821800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.821936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.821988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.822176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.822230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.822346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.822399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.822592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.822657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.822811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.822879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.823111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.823165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.047 qpair failed and we were unable to recover it.
00:24:28.047 [2024-07-24 19:21:33.823327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.047 [2024-07-24 19:21:33.823390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.823598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.823650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.823844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.823897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.824087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.824138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.824330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.824381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.824576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.824638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.824920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.824978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.825151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.825213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.825466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.825525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.825725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.825775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.825926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.825990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.826169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.826220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.826323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.826352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.826517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.826576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.826728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.826792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.826964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.827012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.827198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.827250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.827378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.827438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.827601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.827650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.827877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.827926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.828029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.828055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.828148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.828174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.828319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.828367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.828463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.828500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.828666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.828723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.828864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.828892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.829111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.829172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.829367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.829433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.829652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.829711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.829983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.830041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.830215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.830280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.830539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.830589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.830769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.830796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.830948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.831008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.831216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.831266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.831463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.048 [2024-07-24 19:21:33.831520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.048 qpair failed and we were unable to recover it.
00:24:28.048 [2024-07-24 19:21:33.831626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.049 [2024-07-24 19:21:33.831655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.049 qpair failed and we were unable to recover it.
00:24:28.049 [2024-07-24 19:21:33.831812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.049 [2024-07-24 19:21:33.831868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.049 qpair failed and we were unable to recover it.
00:24:28.049 [2024-07-24 19:21:33.832033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.049 [2024-07-24 19:21:33.832059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.049 qpair failed and we were unable to recover it.
00:24:28.049 [2024-07-24 19:21:33.832188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.049 [2024-07-24 19:21:33.832213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.049 qpair failed and we were unable to recover it.
00:24:28.049 [2024-07-24 19:21:33.832340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.832365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.832488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.832519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.832673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.832735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.832938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.832987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.833202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.833250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.833433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.833496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.833691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.833739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.833896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.833955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.834146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.834195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.834295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.834323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 
00:24:28.049 [2024-07-24 19:21:33.834464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.834534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.834672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.834753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.834984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.835034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.835203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.835255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.835425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.835486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.835598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.835624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.835769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.835795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.835924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.835974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.836150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.836176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.836344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.836409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 
00:24:28.049 [2024-07-24 19:21:33.836599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.836668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.836923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.836980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.837154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.837215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.837422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.837505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.837755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.837813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.049 [2024-07-24 19:21:33.837983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.049 [2024-07-24 19:21:33.838046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.049 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.838288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.838348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.838512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.838572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.838791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.838849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.839104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.839162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 
00:24:28.050 [2024-07-24 19:21:33.839422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.839492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.839667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.839692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.839868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.839895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.840168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.840225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.840473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.840538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.840730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.840789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.841077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.841136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.841383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.841453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.841645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.841671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.841845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.841888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 
00:24:28.050 [2024-07-24 19:21:33.842050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.842101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.842286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.842338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.842504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.842549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.842723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.842771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.842959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.843011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.843166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.843223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.843379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.843439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.843651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.843706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.843867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.843919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.844128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.844191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 
00:24:28.050 [2024-07-24 19:21:33.844364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.844425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.844733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.844790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.845050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.845108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.845305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.845377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.845570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.845644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.845898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.845942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.846161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.846220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.846476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.846559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.846715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.846778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.847016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.847077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 
00:24:28.050 [2024-07-24 19:21:33.847253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.847279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.050 [2024-07-24 19:21:33.847422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.050 [2024-07-24 19:21:33.847507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.050 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.847648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.847698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.847832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.847882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.847999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.848067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.848190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.848238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.848388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.848465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.848687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.848739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.848842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.848869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.848989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.849043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 
00:24:28.051 [2024-07-24 19:21:33.849242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.849298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.849507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.849561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.849705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.849732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.849863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.849911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.850033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.850090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.850284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.850334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.850504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.850548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.850707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.850765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.850976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.851025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.851214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.851265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 
00:24:28.051 [2024-07-24 19:21:33.851472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.851551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.851729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.851769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.852020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.852074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.852261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.852310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.852477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.852531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.852677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.852705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.852885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.852944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.853098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.853177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.853281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.853307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.853542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.853570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 
00:24:28.051 [2024-07-24 19:21:33.853758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.853808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.853960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.854002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.854265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.854323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.854500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.854544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.854780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.854839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.855083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.855139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.855304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.855351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.051 [2024-07-24 19:21:33.855537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.051 [2024-07-24 19:21:33.855565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.051 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.855716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.855777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.855982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.856036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 
00:24:28.052 [2024-07-24 19:21:33.856133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.856158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.856284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.856326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.856512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.856563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.856729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.856780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.856962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.857013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.857157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.857209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.857393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.857440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.857610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.857674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.857863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.857912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.858066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.858130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 
00:24:28.052 [2024-07-24 19:21:33.858303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.858354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.858504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.858569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.858725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.858784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.858924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.858977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.859174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.859225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.859362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.859404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.859556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.859615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.859786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.859844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.860019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.860072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.860220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.860248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 
00:24:28.052 [2024-07-24 19:21:33.860414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.860499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.860694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.860755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.861045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.861102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.861301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.861373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.861637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.861688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.861838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.861900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.862084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.862132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.862279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.862305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.862496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.862547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 00:24:28.052 [2024-07-24 19:21:33.862715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.052 [2024-07-24 19:21:33.862776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.052 qpair failed and we were unable to recover it. 
00:24:28.052 [2024-07-24 19:21:33.862934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.052 [2024-07-24 19:21:33.863007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.052 qpair failed and we were unable to recover it.
00:24:28.052 [2024-07-24 19:21:33.863300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.052 [2024-07-24 19:21:33.863357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.052 qpair failed and we were unable to recover it.
00:24:28.052 [2024-07-24 19:21:33.863551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.052 [2024-07-24 19:21:33.863577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.052 qpair failed and we were unable to recover it.
00:24:28.052 [2024-07-24 19:21:33.863673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.052 [2024-07-24 19:21:33.863699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.052 qpair failed and we were unable to recover it.
00:24:28.052 [2024-07-24 19:21:33.863833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.863883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.864080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.864136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.864303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.864356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.864547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.864595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.864693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.864719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.864925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.864978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.865191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.865241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.865423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.865472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.865651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.865709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.865858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.865920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.866089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.866143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.866310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.866361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.866559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.866617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.866795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.866841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.867024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.867081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.867230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.867257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.867417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.867466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.867622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.867650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.867887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.867951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.868180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.868238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.868536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.868562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.868777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.868834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.869020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.869087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.869284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.869344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.869501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.869584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.869756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.869783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.869947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.870003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.870162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.870188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.870327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.870372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.870530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.870559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.870687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.870745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.870942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.870993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.871180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.871235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.871429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.871477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.871648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.053 [2024-07-24 19:21:33.871674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.053 qpair failed and we were unable to recover it.
00:24:28.053 [2024-07-24 19:21:33.871857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.871911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.872048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.872101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.872271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.872322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.872505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.872552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.872759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.872806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.872968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.873026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.873174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.873221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.873404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.873460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.873670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.873736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.874039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.874105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.874377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.874437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.874653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.874714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.874946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.875006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.875250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.875308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.875558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.875584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.875799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.875825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.875997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.876072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.876338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.876397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.876658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.876717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.876896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.876963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.877293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.877351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.877546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.877572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.877773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.877832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.878041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.878100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.878286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.878346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.878543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.878606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.878864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.878917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.879056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.879107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.879346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.879393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.879553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.879580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.879718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.054 [2024-07-24 19:21:33.879759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.054 qpair failed and we were unable to recover it.
00:24:28.054 [2024-07-24 19:21:33.879914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.879974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.880145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.880201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.880393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.880442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.880661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.880713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.880877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.880905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.881150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.881208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.881430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.881551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.881817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.881875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.882058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.882126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.882417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.882474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.882697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.882755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.882927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.882990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.883263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.883329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.883564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.883623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.883868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.883926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.884108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.884154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.884395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.884452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.884731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.884790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.885047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.885075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.885219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.885271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.885492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.885537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.885639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.885665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.885852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.885878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.886053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.886105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.886282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.886331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.886510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.886556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.886729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.886782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.886919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.886976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.887219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.887269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.887364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.887390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.887539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.887589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.887722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.887772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.887907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.055 [2024-07-24 19:21:33.887958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.055 qpair failed and we were unable to recover it.
00:24:28.055 [2024-07-24 19:21:33.888224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.888287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.888576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.888639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.888868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.888925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.889104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.889168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.889367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.889427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.889601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.889631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.889827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.889878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.890073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.890121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.890272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.890335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.890470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.890556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.890740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.890766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.890951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.891004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.891161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.891220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.891316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.891342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.891540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.891567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.891735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.891790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.891967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.891993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.892141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.892191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.892347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.892405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.892534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.892588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.892819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.892882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.893070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.893131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.893428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.893500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.893796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.893822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.894044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.894103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.894311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.894369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.056 qpair failed and we were unable to recover it.
00:24:28.056 [2024-07-24 19:21:33.894551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.056 [2024-07-24 19:21:33.894602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.894780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.894841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.895051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.895124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.895452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.895533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.895721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.895783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.896017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.896077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.896372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.896430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.896664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.896727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.896947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.896974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.897159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.897217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.897433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.897489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.897590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.897615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.897716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.897743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.897896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.897969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.898126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.898182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.898495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.898538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.898725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.898786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.899048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.899106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.899301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.899373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.899676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.899735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.899920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.899984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.900174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.900238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.900421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.900464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.900753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.900811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.901000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.901069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.901268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.901338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.057 qpair failed and we were unable to recover it.
00:24:28.057 [2024-07-24 19:21:33.901604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.057 [2024-07-24 19:21:33.901663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.902001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.902058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.902240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.902286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.902521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.902566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.902830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.902874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.903038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.903097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.903242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.903297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.903509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.903552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.903664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.903691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.903803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.903866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.903963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.903989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.904148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.904209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.904441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.904514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.904703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.904728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.904956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.905014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.905285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.905346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.905553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.905580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.905800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.905858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.906034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.906092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.906361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.906422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.906633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.906693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.906929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.906998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.907246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.907305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.907541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.907567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.907795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.907854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.908038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.908097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.058 [2024-07-24 19:21:33.908322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.058 [2024-07-24 19:21:33.908348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.058 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.908508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.908537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.908672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.908721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.908935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.908990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.909186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.909236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.909464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.909541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.909728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.909786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.910046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.910103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.910289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.910346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.910517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.910544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.910688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.910739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.910900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.910927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.911076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.911138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.911304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.911329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.911459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.911522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.911641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.911698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.911834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.911888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.912050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.912096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.912254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.912304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.912453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.912509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.912640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.912691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.912821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.912870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.913012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.913067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.913238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.913290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.913448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.913516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.913746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.913793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.914008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.059 [2024-07-24 19:21:33.914072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.059 qpair failed and we were unable to recover it.
00:24:28.059 [2024-07-24 19:21:33.914335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.060 [2024-07-24 19:21:33.914361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.060 qpair failed and we were unable to recover it.
00:24:28.060 [2024-07-24 19:21:33.914610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.060 [2024-07-24 19:21:33.914665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.060 qpair failed and we were unable to recover it.
00:24:28.060 [2024-07-24 19:21:33.914817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.914871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.915030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.915085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.915301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.915350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.915504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.915531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.915709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.915759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.915903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.915953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.916108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.916160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.916321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.916379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.916573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.916636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.916860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.916887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 
00:24:28.060 [2024-07-24 19:21:33.917080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.917140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.917350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.917401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.917679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.917738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.918001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.918059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.918306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.918331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.918553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.918580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.918764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.918821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.919062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.919122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.060 qpair failed and we were unable to recover it. 00:24:28.060 [2024-07-24 19:21:33.919432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.060 [2024-07-24 19:21:33.919501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.919687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.919747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 
00:24:28.061 [2024-07-24 19:21:33.919943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.920002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.920185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.920242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.920425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.920509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.920685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.920743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.921003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.921028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.921244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.921301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.921456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.921521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.921714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.921772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.921982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.922034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.922232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.922290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 
00:24:28.061 [2024-07-24 19:21:33.922432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.922492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.922668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.922719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.922813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.922839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.923029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.923063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.923191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.923243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.923398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.923426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.923689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.923748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.923971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.924032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.924325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.924384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.924615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.924673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 
00:24:28.061 [2024-07-24 19:21:33.924864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.924923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.925181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.925239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.925502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.925561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.925798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.061 [2024-07-24 19:21:33.925856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.061 qpair failed and we were unable to recover it. 00:24:28.061 [2024-07-24 19:21:33.926042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.926099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.926357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.926414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.926668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.926694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.926976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.927035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.927263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.927321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.927524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.927586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 
00:24:28.062 [2024-07-24 19:21:33.927743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.927804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.928100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.928159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.928413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.928470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.928666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.928692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.929000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.929057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.929244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.929301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.929464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.929498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.929674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.929725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.929909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.929956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.930136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.930188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 
00:24:28.062 [2024-07-24 19:21:33.930310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.930373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.930535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.930563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.930730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.930787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.930963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.931014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.931195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.931244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.931441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.931468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.931710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.931762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.931953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.932008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.932164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.932217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 00:24:28.062 [2024-07-24 19:21:33.932344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.062 [2024-07-24 19:21:33.932397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.062 qpair failed and we were unable to recover it. 
00:24:28.062 [2024-07-24 19:21:33.932554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.932580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.932676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.932703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.932888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.932943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.933131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.933180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.933342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.933399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.933577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.933639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.933819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.933878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.934102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.934160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.934431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.934502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.934685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.934711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 
00:24:28.063 [2024-07-24 19:21:33.934864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.934920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.935122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.935179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.935329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.935382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.935561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.935620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.935851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.935907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.936094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.936154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.936446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.936532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.936721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.936781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.936953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.937011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.937205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.937264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 
00:24:28.063 [2024-07-24 19:21:33.937478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.937561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.937730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.937758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.937888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.937946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.938109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.938134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.938271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.938324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.938419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.938445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.063 [2024-07-24 19:21:33.938623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.063 [2024-07-24 19:21:33.938685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.063 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.938912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.938971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.939119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.939166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.939420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.939494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 
00:24:28.064 [2024-07-24 19:21:33.939698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.939768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.939918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.939970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.940157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.940215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.940394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.940451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.940648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.940707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.940900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.940961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.941137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.941196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.941363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.941420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.941708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.941767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.941944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.942002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 
00:24:28.064 [2024-07-24 19:21:33.942265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.942323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.942555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.942615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.942881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.942941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.943124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.943183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.943401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.943432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.943588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.943643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.943802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.943828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.943994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.944045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.944167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.944236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.944364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.944420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 
00:24:28.064 [2024-07-24 19:21:33.944521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.944548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.944659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.944686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.944785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.064 [2024-07-24 19:21:33.944811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.064 qpair failed and we were unable to recover it. 00:24:28.064 [2024-07-24 19:21:33.944966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.065 [2024-07-24 19:21:33.945020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.065 qpair failed and we were unable to recover it. 00:24:28.065 [2024-07-24 19:21:33.945201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.065 [2024-07-24 19:21:33.945253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.065 qpair failed and we were unable to recover it. 00:24:28.065 [2024-07-24 19:21:33.945357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.065 [2024-07-24 19:21:33.945384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.065 qpair failed and we were unable to recover it. 00:24:28.065 [2024-07-24 19:21:33.945489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.065 [2024-07-24 19:21:33.945516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.065 qpair failed and we were unable to recover it. 00:24:28.065 [2024-07-24 19:21:33.945619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.065 [2024-07-24 19:21:33.945653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.065 qpair failed and we were unable to recover it. 00:24:28.065 [2024-07-24 19:21:33.945756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.065 [2024-07-24 19:21:33.945781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.065 qpair failed and we were unable to recover it. 00:24:28.065 [2024-07-24 19:21:33.945898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.065 [2024-07-24 19:21:33.945924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.065 qpair failed and we were unable to recover it. 
00:24:28.065 [2024-07-24 19:21:33.946017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.065 [2024-07-24 19:21:33.946042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.065 qpair failed and we were unable to recover it.
00:24:28.065 [2024-07-24 19:21:33.946373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.065 [2024-07-24 19:21:33.946415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.065 qpair failed and we were unable to recover it.
00:24:28.066 [2024-07-24 19:21:33.954140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.066 [2024-07-24 19:21:33.954190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.066 qpair failed and we were unable to recover it.
00:24:28.067 [2024-07-24 19:21:33.958505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.067 [2024-07-24 19:21:33.958540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.067 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats continuously from 19:21:33.946 through 19:21:33.990, cycling over tqpair=0xd42120, 0x7f0604000b90, 0x7f05fc000b90, and 0x7f05f4000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:24:28.072 [2024-07-24 19:21:33.990807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.990862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.991023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.991075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.991231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.991284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.991385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.991411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.991514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.991543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.991699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.991753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.991984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.992045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.992272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.992330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.992528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.992595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.992859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.992918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 
00:24:28.072 [2024-07-24 19:21:33.993148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.993207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.993386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.993443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.993610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.993638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.993786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.993841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.993941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.993968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.994147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.994202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.994349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.994402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.994549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.994603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.994741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.994793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.994961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.995011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 
00:24:28.072 [2024-07-24 19:21:33.995166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.995222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.995456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.995536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.995737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.995792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.995911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.995970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.996148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.996200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.996346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.996394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.996500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.996528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.072 [2024-07-24 19:21:33.996724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.072 [2024-07-24 19:21:33.996774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.072 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:33.996916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.996970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:33.997121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.997175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 
00:24:28.073 [2024-07-24 19:21:33.997295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.997346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:33.997546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.997572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:33.997717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.997768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:33.997922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.997974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:33.998069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.998094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:33.998250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.998303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:33.998454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.998516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:33.998653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.998705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:33.998837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.998890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:33.998993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.999019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 
00:24:28.073 [2024-07-24 19:21:33.999224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.999274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:33.999431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.999499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:33.999664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.999718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:33.999858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:33.999911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.000053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.000107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.000228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.000285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.000434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.000492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.000592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.000618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.000805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.000854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.001014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.001071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 
00:24:28.073 [2024-07-24 19:21:34.001170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.001196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.001360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.001414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.001606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.001654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.001776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.001834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.001997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.002049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.002233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.002285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.002431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.002495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.002671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.002720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.002821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.002849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.003009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.003035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 
00:24:28.073 [2024-07-24 19:21:34.003197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.003251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.003355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.003383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.003519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.003562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.073 [2024-07-24 19:21:34.003660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.073 [2024-07-24 19:21:34.003686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.073 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.003879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.003930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.004072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.004124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.004256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.004308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.004427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.004492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.004686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.004734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.004881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.004938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 
00:24:28.074 [2024-07-24 19:21:34.005085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.005137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.005296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.005325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.005425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.005452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.005640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.005688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.005859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.005915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.006031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.006086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.006220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.006275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.006453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.006510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.006641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.006692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.006857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.006884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 
00:24:28.074 [2024-07-24 19:21:34.007004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.007061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.007189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.007244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.007343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.007370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.007552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.007582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.007776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.007827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.008040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.008091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.008218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.008270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.008373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.008400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.008579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.008633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.008774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.008828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 
00:24:28.074 [2024-07-24 19:21:34.008982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.009036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.009153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.009206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.009374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.009427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.009566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.009623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.009721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.009746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.009848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.009874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.010030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.010083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.010266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.010320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.010427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.010454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 00:24:28.074 [2024-07-24 19:21:34.010618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.074 [2024-07-24 19:21:34.010677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.074 qpair failed and we were unable to recover it. 
00:24:28.074 [2024-07-24 19:21:34.010811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.010865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.010967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.010992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.011129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.011181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.011307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.011357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.011543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.011571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.011673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.011701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.011821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.011880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.012006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.012035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.012197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.012250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.012404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.012458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 
00:24:28.075 [2024-07-24 19:21:34.012573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.012600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.012826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.012880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.013025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.013077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.013247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.013299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.013391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.013417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.013556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.013611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.013737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.013790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.013906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.013961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.014111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.014163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.014297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.014349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 
00:24:28.075 [2024-07-24 19:21:34.014501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.014554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.014702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.014753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.014903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.014956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.015146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.015201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.015375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.015430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.015610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.015665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.015859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.015907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.016004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.016030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.016177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.016232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.016378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.016432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 
00:24:28.075 [2024-07-24 19:21:34.016560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.016612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.016713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.016739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.016887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.016940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.017040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.017067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.017177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.017203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.017349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.075 [2024-07-24 19:21:34.017403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.075 qpair failed and we were unable to recover it. 00:24:28.075 [2024-07-24 19:21:34.017535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.076 [2024-07-24 19:21:34.017562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.076 qpair failed and we were unable to recover it. 00:24:28.076 [2024-07-24 19:21:34.017662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.076 [2024-07-24 19:21:34.017688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.076 qpair failed and we were unable to recover it. 00:24:28.076 [2024-07-24 19:21:34.017858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.076 [2024-07-24 19:21:34.017887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.076 qpair failed and we were unable to recover it. 00:24:28.076 [2024-07-24 19:21:34.018032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.076 [2024-07-24 19:21:34.018086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.076 qpair failed and we were unable to recover it. 
00:24:28.369 [2024-07-24 19:21:34.058837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.058899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.059058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.059106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.059244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.059302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.059443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.059503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.059693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.059744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.059915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.059962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.060151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.060199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.060318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.060370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.060542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.060570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.060715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.060762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 
00:24:28.369 [2024-07-24 19:21:34.060888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.060937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.061072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.061124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.061297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.061347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.061534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.061561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.061713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.061738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.061880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.061931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.062128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.062179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.062382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.062436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.062604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.062660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.062869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.062924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 
00:24:28.369 [2024-07-24 19:21:34.063081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.063134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.063294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.063346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.063585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.063634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.063786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.063839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.063931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.063956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.064081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.064137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.064335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.064389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.064526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.064579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.064713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.064766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.064945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.065000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 
00:24:28.369 [2024-07-24 19:21:34.065141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.369 [2024-07-24 19:21:34.065193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.369 qpair failed and we were unable to recover it. 00:24:28.369 [2024-07-24 19:21:34.065313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.065366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.065563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.065613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.065738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.065797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.065953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.065980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.066176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.066202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.066344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.066398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.066499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.066525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.066711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.066766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.067004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.067052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 
00:24:28.370 [2024-07-24 19:21:34.067237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.067264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.067460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.067515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.067732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.067785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.067987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.068040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.068226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.068252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.068423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.068474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.068626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.068679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.068837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.068892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.069080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.069131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.069249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.069308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 
00:24:28.370 [2024-07-24 19:21:34.069461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.069520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.069719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.069770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.069976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.070008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.070162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.070214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.070346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.070399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.070500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.070527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.070658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.070712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.070900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.070950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.071100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.071154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.071333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.071385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 
00:24:28.370 [2024-07-24 19:21:34.071515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.071567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.071672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.071698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.071830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.071882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.072062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.072111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.072261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.072317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.072496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.072548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.072704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.370 [2024-07-24 19:21:34.072758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.370 qpair failed and we were unable to recover it. 00:24:28.370 [2024-07-24 19:21:34.072858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.072885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.073028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.073080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.073290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.073343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 
00:24:28.371 [2024-07-24 19:21:34.073506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.073553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.073695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.073747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.073889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.073941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.074153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.074206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.074310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.074337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.074458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.074516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.074658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.074705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.074919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.074966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.075088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.075142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.075386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.075433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 
00:24:28.371 [2024-07-24 19:21:34.075659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.075714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.075863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.075915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.076012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.076038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.076229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.076279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.076395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.076451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.076609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.076664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.076797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.076849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.077049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.077103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.077254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.077308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.077526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.077553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 
00:24:28.371 [2024-07-24 19:21:34.077736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.077786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.077918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.077969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.078149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.078206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.078391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.078441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.078602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.078655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.078853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.078906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.079003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.079029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.079232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.079283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.079433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.079490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.079630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.079681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 
00:24:28.371 [2024-07-24 19:21:34.079858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.079906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.080079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.080128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.080266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.371 [2024-07-24 19:21:34.080316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.371 qpair failed and we were unable to recover it. 00:24:28.371 [2024-07-24 19:21:34.080460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.080519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.080649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.080701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.080878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.080929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.081051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.081105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.081233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.081259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.081355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.081381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.081508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.081560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 
00:24:28.372 [2024-07-24 19:21:34.081771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.081822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.081921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.081948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.082101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.082128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.082317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.082367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.082459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.082490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.082699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.082751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.082889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.082950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.083071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.083126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.083269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.083321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.083500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.083531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 
00:24:28.372 [2024-07-24 19:21:34.083721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.083771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.083953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.084005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.084184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.084209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.084327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.084379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.084538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.084565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.084734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.084785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.084905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.084957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.085156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.085208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.085413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.085460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 00:24:28.372 [2024-07-24 19:21:34.085628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.372 [2024-07-24 19:21:34.085679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.372 qpair failed and we were unable to recover it. 
00:24:28.372 [2024-07-24 19:21:34.085824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.372 [2024-07-24 19:21:34.085881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.372 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1023 connect() errno = 111, nvme_tcp.c:2383 sock connection error, "qpair failed and we were unable to recover it.") repeats back-to-back from 19:21:34.085 through 19:21:34.096, alternating between tqpair handles 0x7f05fc000b90 and 0xd42120, always against addr=10.0.0.2, port=4420 ...]
00:24:28.374 [2024-07-24 19:21:34.096094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd50190 (9): Bad file descriptor
[... the connect()/qpair-failure triplet then resumes from 19:21:34.096 through 19:21:34.127, now cycling through tqpair handles 0xd42120, 0x7f05fc000b90, 0x7f05f4000b90, and 0x7f0604000b90, all against addr=10.0.0.2, port=4420; every attempt again ends with "qpair failed and we were unable to recover it." ...]
00:24:28.378 [2024-07-24 19:21:34.127737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.378 [2024-07-24 19:21:34.127763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.378 qpair failed and we were unable to recover it. 00:24:28.378 [2024-07-24 19:21:34.127889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.378 [2024-07-24 19:21:34.127928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.378 qpair failed and we were unable to recover it. 00:24:28.378 [2024-07-24 19:21:34.128069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.378 [2024-07-24 19:21:34.128112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.378 qpair failed and we were unable to recover it. 00:24:28.378 [2024-07-24 19:21:34.128261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.378 [2024-07-24 19:21:34.128301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.378 qpair failed and we were unable to recover it. 00:24:28.378 [2024-07-24 19:21:34.128434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.378 [2024-07-24 19:21:34.128464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.378 qpair failed and we were unable to recover it. 00:24:28.378 [2024-07-24 19:21:34.128621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.378 [2024-07-24 19:21:34.128648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.378 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.128762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.128790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.128900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.128938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.129110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.129152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.129273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.129315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 
00:24:28.379 [2024-07-24 19:21:34.129426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.129454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.129622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.129662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.129776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.129816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.129944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.129983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.130101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.130131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.130267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.130294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.130405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.130432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.130574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.130601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.130709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.130736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.130868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.130910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 
00:24:28.379 [2024-07-24 19:21:34.131030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.131070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.131199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.131240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.131356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.131387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.131532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.131561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.131692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.131731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.131888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.131927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.132046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.132088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.132205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.132246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.132374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.132403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.132534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.132560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 
00:24:28.379 [2024-07-24 19:21:34.132663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.132690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.132792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.132819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.132939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.132967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.133097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.133123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.133224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.133249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.133340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.133366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.133487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.133513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.133644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.133671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.133769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.133796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.133894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.133921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 
00:24:28.379 [2024-07-24 19:21:34.134023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.134050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.134153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.379 [2024-07-24 19:21:34.134180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.379 qpair failed and we were unable to recover it. 00:24:28.379 [2024-07-24 19:21:34.134297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.134326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.134434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.134459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.134570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.134599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.134716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.134743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.134871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.134897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.134998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.135024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.135124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.135150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.135256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.135289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 
00:24:28.380 [2024-07-24 19:21:34.135395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.135422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.135525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.135553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.135668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.135695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.135793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.135819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.135933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.135959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.136072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.136098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.136235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.136262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.136380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.136408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.136513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.136541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.136637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.136663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 
00:24:28.380 [2024-07-24 19:21:34.136759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.136785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.136877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.136902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.137008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.137035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.137154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.137181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.137275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.137302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.137398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.137424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.137524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.137550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.137674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.137700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.137797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.137824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.137923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.137949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 
00:24:28.380 [2024-07-24 19:21:34.138045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.138071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.138175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.138201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.138293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.138319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.138419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.138444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.138565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.138595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.138698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.138724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.138850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.138879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.138989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.139021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.380 qpair failed and we were unable to recover it. 00:24:28.380 [2024-07-24 19:21:34.139120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.380 [2024-07-24 19:21:34.139149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.139262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.139288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 
00:24:28.381 [2024-07-24 19:21:34.139393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.139418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.139532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.139560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.139660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.139686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.139787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.139814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.139955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.139981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.140079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.140105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.140248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.140276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.140404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.140430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.140529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.140555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.140653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.140679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 
00:24:28.381 [2024-07-24 19:21:34.140783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.140809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.140906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.140932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.141031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.141057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.141154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.141179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.141306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.141331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.141422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.141448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.141565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.141591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.141711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.141737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.141861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.141886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.141980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.142006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 
00:24:28.381 [2024-07-24 19:21:34.142121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.142148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.142252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.142280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.142380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.142407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.142511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.142542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.142655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.142682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.142790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.142816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.142920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.142950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.143069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.143097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.143224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.143250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.143349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.143374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 
00:24:28.381 [2024-07-24 19:21:34.143472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.143507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.143622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.143648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.143767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.143792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.381 [2024-07-24 19:21:34.143902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.381 [2024-07-24 19:21:34.143927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.381 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.144027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.144054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.144161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.144187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.144288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.144316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.144424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.144454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.144581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.144610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.144707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.144735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 
00:24:28.382 [2024-07-24 19:21:34.144829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.144855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.144982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.145009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.145107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.145133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.145240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.145266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.145368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.145397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.145515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.145542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.145649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.145675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.145775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.145801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.145904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.145930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 00:24:28.382 [2024-07-24 19:21:34.146028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.146057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it. 
00:24:28.382 [2024-07-24 19:21:34.146155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.382 [2024-07-24 19:21:34.146182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.382 qpair failed and we were unable to recover it.
[The same three-message failure pattern (posix_sock_create: connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 19:21:34.146 and 19:21:34.180, cycling through tqpair values 0xd42120, 0x7f05f4000b90, 0x7f05fc000b90, and 0x7f0604000b90, always with addr=10.0.0.2, port=4420.]
00:24:28.388 [2024-07-24 19:21:34.180212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.388 [2024-07-24 19:21:34.180258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.388 qpair failed and we were unable to recover it. 00:24:28.388 [2024-07-24 19:21:34.180380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.388 [2024-07-24 19:21:34.180413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.388 qpair failed and we were unable to recover it. 00:24:28.388 [2024-07-24 19:21:34.180575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.388 [2024-07-24 19:21:34.180630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.388 qpair failed and we were unable to recover it. 00:24:28.388 [2024-07-24 19:21:34.180765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.388 [2024-07-24 19:21:34.180818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.388 qpair failed and we were unable to recover it. 00:24:28.388 [2024-07-24 19:21:34.180960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.388 [2024-07-24 19:21:34.181017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.388 qpair failed and we were unable to recover it. 00:24:28.388 [2024-07-24 19:21:34.181207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.388 [2024-07-24 19:21:34.181234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.388 qpair failed and we were unable to recover it. 00:24:28.388 [2024-07-24 19:21:34.181361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.388 [2024-07-24 19:21:34.181415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.388 qpair failed and we were unable to recover it. 00:24:28.388 [2024-07-24 19:21:34.181515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.388 [2024-07-24 19:21:34.181542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.388 qpair failed and we were unable to recover it. 00:24:28.388 [2024-07-24 19:21:34.181722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.388 [2024-07-24 19:21:34.181765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.388 qpair failed and we were unable to recover it. 00:24:28.388 [2024-07-24 19:21:34.181865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.388 [2024-07-24 19:21:34.181892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.388 qpair failed and we were unable to recover it. 
00:24:28.388 [2024-07-24 19:21:34.182085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.388 [2024-07-24 19:21:34.182133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.388 qpair failed and we were unable to recover it. 00:24:28.388 [2024-07-24 19:21:34.182287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.388 [2024-07-24 19:21:34.182338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.388 qpair failed and we were unable to recover it. 00:24:28.388 [2024-07-24 19:21:34.182473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.388 [2024-07-24 19:21:34.182529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.388 qpair failed and we were unable to recover it. 00:24:28.388 [2024-07-24 19:21:34.182657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.182709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.182841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.182909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.183089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.183115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.183286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.183342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.183462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.183527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.183735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.183788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.183945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.183997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 
00:24:28.389 [2024-07-24 19:21:34.184152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.184202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.184347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.184401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.184498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.184525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.184695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.184747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.184940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.184994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.185137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.185185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.185330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.185380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.185477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.185531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.185659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.185713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.185874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.185901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 
00:24:28.389 [2024-07-24 19:21:34.186130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.186179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.186332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.186387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.186545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.186572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.186773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.186800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.186959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.187007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.187146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.187199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.187319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.187376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.187514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.187553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.187682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.187738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.187854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.187912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 
00:24:28.389 [2024-07-24 19:21:34.188081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.188135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.188232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.188258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.188359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.188384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.188548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.188596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.188702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.188736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.188838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.188865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.189000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.189049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.189196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.189249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.389 qpair failed and we were unable to recover it. 00:24:28.389 [2024-07-24 19:21:34.189369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.389 [2024-07-24 19:21:34.189423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.189523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.189550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 
00:24:28.390 [2024-07-24 19:21:34.189660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.189686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.189805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.189831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.190015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.190068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.190220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.190272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.190363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.190442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.190567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.190625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.190776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.190828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.190976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.191030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.191242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.191290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.191425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.191484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 
00:24:28.390 [2024-07-24 19:21:34.191644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.191696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.191847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.191873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.191968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.191993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.192124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.192168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.192298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.192350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.192489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.192536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.192682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.192739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.192880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.192930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.193084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.193110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.193278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.193328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 
00:24:28.390 [2024-07-24 19:21:34.193494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.193546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.193683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.193741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.193873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.193926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.194101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.194127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.194253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.194305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.194398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.194423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.194547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.194608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.194708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.194734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.194832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.194858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.195038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.195087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 
00:24:28.390 [2024-07-24 19:21:34.195232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.195279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.195396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.195448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.195643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.195707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.195926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.195975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.196156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.390 [2024-07-24 19:21:34.196209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.390 qpair failed and we were unable to recover it. 00:24:28.390 [2024-07-24 19:21:34.196375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.196435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.196536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.196562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.196691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.196747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.196916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.196966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.197162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.197213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 
00:24:28.391 [2024-07-24 19:21:34.197307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.197332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.197535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.197588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.197740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.197766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.197858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.197884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.197981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.198006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.198147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.198191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.198338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.198391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.198488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.198514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.198612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.198639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.198834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.198885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 
00:24:28.391 [2024-07-24 19:21:34.199060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.199113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.199232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.199283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.199374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.199399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.199499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.199525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.199724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.199772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.199951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.200001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.200131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.200182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.200300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.200355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.200524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.200577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.200730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.200773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 
00:24:28.391 [2024-07-24 19:21:34.200924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.200951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.201048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.201075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.201222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.201284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.201379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.201405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.201501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.201527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.201621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.201647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.201749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.201774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.201944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.201996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.202195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.202245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 00:24:28.391 [2024-07-24 19:21:34.202388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.391 [2024-07-24 19:21:34.202434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.391 qpair failed and we were unable to recover it. 
00:24:28.391 [2024-07-24 19:21:34.202573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.392 [2024-07-24 19:21:34.202625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.392 qpair failed and we were unable to recover it. 00:24:28.392 [2024-07-24 19:21:34.202777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.392 [2024-07-24 19:21:34.202831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.392 qpair failed and we were unable to recover it. 00:24:28.392 [2024-07-24 19:21:34.202964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.392 [2024-07-24 19:21:34.203015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.392 qpair failed and we were unable to recover it. 00:24:28.392 [2024-07-24 19:21:34.203162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.392 [2024-07-24 19:21:34.203213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.392 qpair failed and we were unable to recover it. 00:24:28.392 [2024-07-24 19:21:34.203352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.392 [2024-07-24 19:21:34.203404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.392 qpair failed and we were unable to recover it. 00:24:28.392 [2024-07-24 19:21:34.203505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.392 [2024-07-24 19:21:34.203531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.392 qpair failed and we were unable to recover it. 00:24:28.392 [2024-07-24 19:21:34.203640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.392 [2024-07-24 19:21:34.203669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.392 qpair failed and we were unable to recover it. 00:24:28.392 [2024-07-24 19:21:34.203847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.392 [2024-07-24 19:21:34.203898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.392 qpair failed and we were unable to recover it. 00:24:28.392 [2024-07-24 19:21:34.204114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.392 [2024-07-24 19:21:34.204163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.392 qpair failed and we were unable to recover it. 00:24:28.392 [2024-07-24 19:21:34.204320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.392 [2024-07-24 19:21:34.204372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.392 qpair failed and we were unable to recover it. 
00:24:28.392 [2024-07-24 19:21:34.204528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.392 [2024-07-24 19:21:34.204570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.392 qpair failed and we were unable to recover it.
00:24:28.398 [... the same three-line sequence repeats continuously from 19:21:34.204528 through 19:21:34.245964, cycling over tqpair handles 0x7f05fc000b90, 0xd42120, 0x7f0604000b90 and 0x7f05f4000b90; every connect() to 10.0.0.2, port=4420 fails with errno = 111 and no qpair is recovered ...]
00:24:28.398 [2024-07-24 19:21:34.246138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.398 [2024-07-24 19:21:34.246188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.398 qpair failed and we were unable to recover it. 00:24:28.398 [2024-07-24 19:21:34.246295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.398 [2024-07-24 19:21:34.246325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.398 qpair failed and we were unable to recover it. 00:24:28.398 [2024-07-24 19:21:34.246428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.398 [2024-07-24 19:21:34.246456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.398 qpair failed and we were unable to recover it. 00:24:28.398 [2024-07-24 19:21:34.246680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.398 [2024-07-24 19:21:34.246730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.398 qpair failed and we were unable to recover it. 00:24:28.398 [2024-07-24 19:21:34.246937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.398 [2024-07-24 19:21:34.246986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.398 qpair failed and we were unable to recover it. 00:24:28.398 [2024-07-24 19:21:34.247155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.398 [2024-07-24 19:21:34.247212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.398 qpair failed and we were unable to recover it. 00:24:28.398 [2024-07-24 19:21:34.247362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.398 [2024-07-24 19:21:34.247417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.398 qpair failed and we were unable to recover it. 00:24:28.398 [2024-07-24 19:21:34.247572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.247629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.247771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.247824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.247920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.247946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 
00:24:28.399 [2024-07-24 19:21:34.248101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.248153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.248309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.248364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.248490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.248560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.248726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.248778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.248940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.248965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.249145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.249193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.249309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.249352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.249519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.249565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.249693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.249746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.249905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.249949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 
00:24:28.399 [2024-07-24 19:21:34.250104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.250159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.250338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.250387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.250598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.250646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.250772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.250828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.250959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.251003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.251140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.251188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.251285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.251311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.251504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.251556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.251747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.251801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.251945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.252025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 
00:24:28.399 [2024-07-24 19:21:34.252181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.252233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.252406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.252460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.252675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.252725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.252920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.252969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.253128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.253184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.253338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.253390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.253576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.253627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.253731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.253759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.253932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.253985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.254141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.254167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 
00:24:28.399 [2024-07-24 19:21:34.254343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.254368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.254537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.254563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.254729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.254755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.399 qpair failed and we were unable to recover it. 00:24:28.399 [2024-07-24 19:21:34.254855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.399 [2024-07-24 19:21:34.254881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.255098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.255146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.255310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.255366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.255503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.255556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.255710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.255737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.255932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.255982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.256142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.256192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 
00:24:28.400 [2024-07-24 19:21:34.256351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.256407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.256558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.256612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.256821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.256885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.257038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.257063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.257170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.257196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.257410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.257510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.257732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.257796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.258004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.258053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.258244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.258294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.258498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.258540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 
00:24:28.400 [2024-07-24 19:21:34.258678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.258734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.258887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.258915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.259078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.259129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.259229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.259256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.259409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.259434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.259604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.259656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.259761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.259788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.259883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.259909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.260120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.260168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.260321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.260370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 
00:24:28.400 [2024-07-24 19:21:34.260543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.260570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.260724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.260772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.260923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.260980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.261092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.261118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.261281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.261334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.261477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.261547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.261735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.261786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.261969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.262018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.262162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.262219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 00:24:28.400 [2024-07-24 19:21:34.262391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.400 [2024-07-24 19:21:34.262445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.400 qpair failed and we were unable to recover it. 
00:24:28.401 [2024-07-24 19:21:34.262548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.262574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.262733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.262784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.262949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.262991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.263155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.263207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.263366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.263423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.263564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.263614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.263744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.263799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.263958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.264011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.264111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.264138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.264240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.264267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 
00:24:28.401 [2024-07-24 19:21:34.264381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.264410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.264517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.264545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.264663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.264688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.264788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.264814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.264919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.264945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.265037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.265068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.265186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.265238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.265333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.265360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.265537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.265563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.265722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.265772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 
00:24:28.401 [2024-07-24 19:21:34.265924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.265950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.266136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.266186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.266346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.266397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.266528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.266591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.266787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.266839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.266962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.267014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.267154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.267207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.267372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.267429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.267566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.267618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.267777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.267804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 
00:24:28.401 [2024-07-24 19:21:34.267902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.267927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.268029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.268056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.268233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.268282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.268436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.268501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.268667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.268718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.401 [2024-07-24 19:21:34.268908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.401 [2024-07-24 19:21:34.268957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.401 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.269057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.269082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.269206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.269260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.269386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.269436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.269626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.269673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 
00:24:28.402 [2024-07-24 19:21:34.269883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.269930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.270066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.270114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.270260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.270285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.270498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.270556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.270702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.270758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.270904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.270957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.271095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.271174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.271310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.271366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.271548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.271575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.271726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.271753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 
00:24:28.402 [2024-07-24 19:21:34.271910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.271964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.272116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.272143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.272244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.272274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.272508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.272565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.272715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.272766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.272927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.272979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.273190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.273242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.273392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.273443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.273620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.273678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 00:24:28.402 [2024-07-24 19:21:34.273811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.402 [2024-07-24 19:21:34.273872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.402 qpair failed and we were unable to recover it. 
00:24:28.408 [2024-07-24 19:21:34.313494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.408 [2024-07-24 19:21:34.313522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.408 qpair failed and we were unable to recover it. 00:24:28.408 [2024-07-24 19:21:34.313670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.408 [2024-07-24 19:21:34.313723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.408 qpair failed and we were unable to recover it. 00:24:28.408 [2024-07-24 19:21:34.313898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.408 [2024-07-24 19:21:34.313923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.408 qpair failed and we were unable to recover it. 00:24:28.408 [2024-07-24 19:21:34.314069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.408 [2024-07-24 19:21:34.314125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.408 qpair failed and we were unable to recover it. 00:24:28.408 [2024-07-24 19:21:34.314241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.408 [2024-07-24 19:21:34.314299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.408 qpair failed and we were unable to recover it. 00:24:28.408 [2024-07-24 19:21:34.314437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.408 [2024-07-24 19:21:34.314507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.408 qpair failed and we were unable to recover it. 00:24:28.408 [2024-07-24 19:21:34.314624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.408 [2024-07-24 19:21:34.314653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.408 qpair failed and we were unable to recover it. 00:24:28.408 [2024-07-24 19:21:34.314838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.408 [2024-07-24 19:21:34.314890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.408 qpair failed and we were unable to recover it. 00:24:28.408 [2024-07-24 19:21:34.315043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.408 [2024-07-24 19:21:34.315070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.408 qpair failed and we were unable to recover it. 00:24:28.408 [2024-07-24 19:21:34.315236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.408 [2024-07-24 19:21:34.315289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.408 qpair failed and we were unable to recover it. 
00:24:28.408 [2024-07-24 19:21:34.315465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.315495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.315665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.315724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.315878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.315924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.316027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.316054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.316207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.316234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.316332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.316359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.316542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.316593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.316788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.316837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.316979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.317027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.317225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.317275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 
00:24:28.409 [2024-07-24 19:21:34.317469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.317529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.317690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.317742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.317909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.317935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.318109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.318164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.318330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.318388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.318573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.318622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.318727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.318754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.318951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.319002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.319100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.319178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.319332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.319390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 
00:24:28.409 [2024-07-24 19:21:34.319560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.319589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.319719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.319767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.319916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.319974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.320116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.320172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.320282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.320309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.320448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.320511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.320741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.320789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.320953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.321006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.321164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.321222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.321365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.321416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 
00:24:28.409 [2024-07-24 19:21:34.321518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.321551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.321647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.321673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.321862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.321913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.322099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.322151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.322341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.322391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.322588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.322638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.409 qpair failed and we were unable to recover it. 00:24:28.409 [2024-07-24 19:21:34.322797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.409 [2024-07-24 19:21:34.322848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.410 qpair failed and we were unable to recover it. 00:24:28.410 [2024-07-24 19:21:34.323019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.410 [2024-07-24 19:21:34.323071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.410 qpair failed and we were unable to recover it. 00:24:28.410 [2024-07-24 19:21:34.323241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.410 [2024-07-24 19:21:34.323297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.410 qpair failed and we were unable to recover it. 00:24:28.410 [2024-07-24 19:21:34.323409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.410 [2024-07-24 19:21:34.323436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.410 qpair failed and we were unable to recover it. 
00:24:28.410 [2024-07-24 19:21:34.323571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.410 [2024-07-24 19:21:34.323626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.410 qpair failed and we were unable to recover it. 00:24:28.410 [2024-07-24 19:21:34.323803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.410 [2024-07-24 19:21:34.323850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.410 qpair failed and we were unable to recover it. 00:24:28.410 [2024-07-24 19:21:34.324007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.410 [2024-07-24 19:21:34.324033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.410 qpair failed and we were unable to recover it. 00:24:28.410 [2024-07-24 19:21:34.324220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.410 [2024-07-24 19:21:34.324247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.410 qpair failed and we were unable to recover it. 00:24:28.410 [2024-07-24 19:21:34.324394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.410 [2024-07-24 19:21:34.324447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.410 qpair failed and we were unable to recover it. 00:24:28.410 [2024-07-24 19:21:34.324618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.410 [2024-07-24 19:21:34.324672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.410 qpair failed and we were unable to recover it. 00:24:28.410 [2024-07-24 19:21:34.324769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.410 [2024-07-24 19:21:34.324794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.410 qpair failed and we were unable to recover it. 00:24:28.410 [2024-07-24 19:21:34.324945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.410 [2024-07-24 19:21:34.324997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.410 qpair failed and we were unable to recover it. 00:24:28.410 [2024-07-24 19:21:34.325174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.410 [2024-07-24 19:21:34.325221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.410 qpair failed and we were unable to recover it. 00:24:28.410 [2024-07-24 19:21:34.325338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.410 [2024-07-24 19:21:34.325394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.410 qpair failed and we were unable to recover it. 
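An editor's note for anyone triaging this run: errno = 111 on Linux is ECONNREFUSED, meaning nothing was listening on TCP port 4420 (the NVMe/TCP well-known port) at 10.0.0.2 during each attempt, so the kernel rejected every connect() outright. A minimal standalone sketch (plain POSIX sockets, not SPDK code; the address and port simply mirror the log) that reproduces exactly the condition posix_sock_create() is reporting:

```c
/* Reproduce errno = 111: connect() to a TCP port with no listener
 * fails with ECONNREFUSED, which is errno 111 on Linux. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```

Run it against any reachable host with no listener on the chosen port and it prints "connect() failed, errno = 111 (Connection refused)", matching the log line above.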
00:24:28.410 [2024-07-24 19:21:34.325550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.410 [2024-07-24 19:21:34.325609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.410 qpair failed and we were unable to recover it.
[... the same pattern continues for roughly a hundred further attempts between 19:21:34.325749 and 19:21:34.347259, again across tqpair=0x7f05fc000b90, 0x7f05f4000b90, and 0xd42120 ...]
00:24:28.413 [2024-07-24 19:21:34.347358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.413 [2024-07-24 19:21:34.347384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.413 qpair failed and we were unable to recover it.
00:24:28.413 [2024-07-24 19:21:34.347548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.413 [2024-07-24 19:21:34.347605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.413 qpair failed and we were unable to recover it. 00:24:28.413 [2024-07-24 19:21:34.347763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.413 [2024-07-24 19:21:34.347790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.413 qpair failed and we were unable to recover it. 00:24:28.413 [2024-07-24 19:21:34.347963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.413 [2024-07-24 19:21:34.348015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.413 qpair failed and we were unable to recover it. 00:24:28.413 [2024-07-24 19:21:34.348178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.413 [2024-07-24 19:21:34.348228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.413 qpair failed and we were unable to recover it. 00:24:28.413 [2024-07-24 19:21:34.348340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.413 [2024-07-24 19:21:34.348366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.413 qpair failed and we were unable to recover it. 00:24:28.413 [2024-07-24 19:21:34.348533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.413 [2024-07-24 19:21:34.348561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.413 qpair failed and we were unable to recover it. 00:24:28.413 [2024-07-24 19:21:34.348660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.413 [2024-07-24 19:21:34.348686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.413 qpair failed and we were unable to recover it. 00:24:28.413 [2024-07-24 19:21:34.348778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.413 [2024-07-24 19:21:34.348803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.413 qpair failed and we were unable to recover it. 00:24:28.413 [2024-07-24 19:21:34.348902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.413 [2024-07-24 19:21:34.348927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.413 qpair failed and we were unable to recover it. 00:24:28.413 [2024-07-24 19:21:34.349108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.413 [2024-07-24 19:21:34.349162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.413 qpair failed and we were unable to recover it. 
00:24:28.413 [2024-07-24 19:21:34.349262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.413 [2024-07-24 19:21:34.349289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.413 qpair failed and we were unable to recover it. 00:24:28.413 [2024-07-24 19:21:34.349389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.414 [2024-07-24 19:21:34.349413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.414 qpair failed and we were unable to recover it. 00:24:28.414 [2024-07-24 19:21:34.349558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.414 [2024-07-24 19:21:34.349611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.414 qpair failed and we were unable to recover it. 00:24:28.414 [2024-07-24 19:21:34.349765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.414 [2024-07-24 19:21:34.349792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.414 qpair failed and we were unable to recover it. 00:24:28.414 [2024-07-24 19:21:34.349956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.414 [2024-07-24 19:21:34.350010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.414 qpair failed and we were unable to recover it. 00:24:28.414 [2024-07-24 19:21:34.350190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.414 [2024-07-24 19:21:34.350239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.414 qpair failed and we were unable to recover it. 00:24:28.414 [2024-07-24 19:21:34.350389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.414 [2024-07-24 19:21:34.350439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.414 qpair failed and we were unable to recover it. 00:24:28.414 [2024-07-24 19:21:34.350575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.414 [2024-07-24 19:21:34.350601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.414 qpair failed and we were unable to recover it. 00:24:28.414 [2024-07-24 19:21:34.350738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.414 [2024-07-24 19:21:34.350793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.414 qpair failed and we were unable to recover it. 00:24:28.414 [2024-07-24 19:21:34.350934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.414 [2024-07-24 19:21:34.350986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.414 qpair failed and we were unable to recover it. 
00:24:28.414 [2024-07-24 19:21:34.351181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.414 [2024-07-24 19:21:34.351231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.414 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-24 19:21:34.351426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-24 19:21:34.351478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-24 19:21:34.351649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-24 19:21:34.351675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-24 19:21:34.351831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-24 19:21:34.351885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-24 19:21:34.352074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-24 19:21:34.352131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-24 19:21:34.352237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-24 19:21:34.352264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-24 19:21:34.352438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-24 19:21:34.352501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-24 19:21:34.352601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-24 19:21:34.352627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-24 19:21:34.352803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-24 19:21:34.352851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-24 19:21:34.353057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.353109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 
00:24:28.698 [2024-07-24 19:21:34.353238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.353291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.353403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.353473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.353659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.353710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.353882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.353936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.354077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.354130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.354294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.354343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.354455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.354524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.354622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.354701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.354907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.354957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.355099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.355150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 
00:24:28.698 [2024-07-24 19:21:34.355242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.355268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.355360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.355385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.355499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.355528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.355688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.355737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.355911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.355961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.356061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.356088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.356224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.356279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.356465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.356527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.356684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.356733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.356875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.356926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 
00:24:28.698 [2024-07-24 19:21:34.357079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.357105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.357261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.357313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.357409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.357435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.357611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.357662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.357785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.357839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.357989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.358044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.358206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.358258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.358355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.358380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.358563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.358610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.358761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.358815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 
00:24:28.698 [2024-07-24 19:21:34.358909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.358934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.359107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.359154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-24 19:21:34.359311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-24 19:21:34.359361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.359505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.359563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.359764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.359817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.359976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.360035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.360194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.360250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.360416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.360468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.360668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.360718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.360872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.360904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 
00:24:28.699 [2024-07-24 19:21:34.361035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.361089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.361236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.361289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.361426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.361490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.361647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.361703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.361851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.361904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.362107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.362161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.362256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.362281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.362451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.362519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.362633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.362658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.362789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.362842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 
00:24:28.699 [2024-07-24 19:21:34.362975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.363027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.363185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.363242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.363436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.363499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.363657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.363712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.363855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.363907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.364069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.364095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.364265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.364318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.364468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.364533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.364688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.364738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.364842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.364869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 
00:24:28.699 [2024-07-24 19:21:34.365046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.365096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.365243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.365297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.365468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.365523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.365698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.365747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.365886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.365937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.366074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.366126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.366281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.366342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.366530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-24 19:21:34.366557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-24 19:21:34.366693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.366744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.366857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.366917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 
00:24:28.700 [2024-07-24 19:21:34.367076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.367127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.367231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.367258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.367437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.367463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.367632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.367685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.367828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.367881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.367985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.368011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.368171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.368230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.368380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.368432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.368569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.368622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.368795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.368852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 
00:24:28.700 [2024-07-24 19:21:34.369047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.369097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.369241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.369293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.369429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.369488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.369696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.369749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.369935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.369964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.370113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.370169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.370332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.370358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.370459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.370492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.370594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.370619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.370774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.370828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 
00:24:28.700 [2024-07-24 19:21:34.370996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.371055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.371186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.371239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.371367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.371393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.371500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.371526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.371654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.371705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.371803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.371828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.371921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.371947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.372076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.372104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.372239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.372266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-24 19:21:34.372380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-24 19:21:34.372406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 
00:24:28.700 [2024-07-24 19:21:34.372507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.700 [2024-07-24 19:21:34.372533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.700 qpair failed and we were unable to recover it.
00:24:28.701 [2024-07-24 19:21:34.374410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.701 [2024-07-24 19:21:34.374464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.701 qpair failed and we were unable to recover it.
00:24:28.701 [2024-07-24 19:21:34.379653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.701 [2024-07-24 19:21:34.379704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.701 qpair failed and we were unable to recover it.
00:24:28.707 [... the same three-line sequence repeats continuously through 2024-07-24 19:21:34.414, alternating among tqpair=0xd42120, tqpair=0x7f05fc000b90, and tqpair=0x7f05f4000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with connect() errno = 111, and every qpair fails without recovery ...]
00:24:28.707 [2024-07-24 19:21:34.414302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.414355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.414458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.414489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.414660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.414686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.414891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.414941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.415099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.415154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.415299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.415351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.415550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.415578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.415738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.415789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.415932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.415984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.416165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.416222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 
00:24:28.707 [2024-07-24 19:21:34.416389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.416436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.416619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.416680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.416860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.416913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.417055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.417106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.417297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.417347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.417476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.417534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.417702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.417758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.417889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.417940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.418101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.418153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 00:24:28.707 [2024-07-24 19:21:34.418319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.418345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.707 qpair failed and we were unable to recover it. 
00:24:28.707 [2024-07-24 19:21:34.418504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.707 [2024-07-24 19:21:34.418548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.418720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.418746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.418910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.418962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.419128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.419178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.419299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.419357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.419550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.419581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.419757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.419805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.419952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.420008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.420176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.420233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.420375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.420432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 
00:24:28.708 [2024-07-24 19:21:34.420650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.420700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.420858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.420911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.421060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.421108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.421231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.421291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.421396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.421421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.421572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.421623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.421765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.421817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.421959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.422010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.422185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.422210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.422307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.422333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 
00:24:28.708 [2024-07-24 19:21:34.422489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.422540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.422709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.422760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.422912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.422969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.423145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.423194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.423393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.423443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.423553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.423581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.423725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.423776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.423929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.423981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.424132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.424158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.424296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.424348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 
00:24:28.708 [2024-07-24 19:21:34.424508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.424554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.424700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.708 [2024-07-24 19:21:34.424754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.708 qpair failed and we were unable to recover it. 00:24:28.708 [2024-07-24 19:21:34.424908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.424941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.425110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.425160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.425319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.425375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.425562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.425588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.425799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.425846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.426036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.426085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.426240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.426293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.426506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.426556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 
00:24:28.709 [2024-07-24 19:21:34.426717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.426774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.426941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.426968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.427175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.427201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.427297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.427322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.427466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.427531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.427709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.427736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.427860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.427923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.428092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.428144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.428315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.428343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.428516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.428557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 
00:24:28.709 [2024-07-24 19:21:34.428713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.428764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.428929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.428985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.429082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.429108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.429271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.429297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.429499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.429550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.429712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.429765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.429940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.429989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.430160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.430210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.430307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.430333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.430537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.430594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 
00:24:28.709 [2024-07-24 19:21:34.430713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.430767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.430909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.430966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.431114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.431170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.431335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.431389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.431529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.431579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.431717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.431766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.431958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.432012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.432162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.709 [2024-07-24 19:21:34.432209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.709 qpair failed and we were unable to recover it. 00:24:28.709 [2024-07-24 19:21:34.432314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.432339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.432502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.432557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 
00:24:28.710 [2024-07-24 19:21:34.432730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.432781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.432973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.433023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.433203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.433255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.433386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.433444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.433597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.433639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.433818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.433868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.434074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.434124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.434280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.434331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.434524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.434566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.434707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.434763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 
00:24:28.710 [2024-07-24 19:21:34.434861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.434887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.435048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.435101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.435268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.435316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.435451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.435515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.435659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.435712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.435857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.435907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.436054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.436107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.436282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.436310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.436459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.436521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.436657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.436713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 
00:24:28.710 [2024-07-24 19:21:34.436866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.436893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.437017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.437075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.437192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.437252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.437427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.437477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.437613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.437670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.437810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.437861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.437983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.438040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.438216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.438273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.438426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.438492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.438693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.438749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 
00:24:28.710 [2024-07-24 19:21:34.438853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.438879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.439036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.439085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.439267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.439314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.710 [2024-07-24 19:21:34.439420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.710 [2024-07-24 19:21:34.439447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.710 qpair failed and we were unable to recover it. 00:24:28.711 [2024-07-24 19:21:34.439603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.711 [2024-07-24 19:21:34.439661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.711 qpair failed and we were unable to recover it. 00:24:28.711 [2024-07-24 19:21:34.439812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.711 [2024-07-24 19:21:34.439837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.711 qpair failed and we were unable to recover it. 00:24:28.711 [2024-07-24 19:21:34.439963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.711 [2024-07-24 19:21:34.440015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.711 qpair failed and we were unable to recover it. 00:24:28.711 [2024-07-24 19:21:34.440195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.711 [2024-07-24 19:21:34.440221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.711 qpair failed and we were unable to recover it. 00:24:28.711 [2024-07-24 19:21:34.440391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.711 [2024-07-24 19:21:34.440438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.711 qpair failed and we were unable to recover it. 00:24:28.711 [2024-07-24 19:21:34.440658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.711 [2024-07-24 19:21:34.440707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.711 qpair failed and we were unable to recover it. 
00:24:28.711 [2024-07-24 19:21:34.440892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.440944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.441110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.441163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.441384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.441435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.441557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.441626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.441837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.441889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.442020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.442074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.442230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.442257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.442441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.442497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.442692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.442718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.442942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.442990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.443099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.443125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.443284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.443339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.443530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.443558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.443750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.443803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.443901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.443928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.444067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.444119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.444220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.444247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.444422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.444477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.444645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.444696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.444886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.444934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.445030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.445055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.445208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.445233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.445399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.445451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.445648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.445678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.445858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.445905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.446104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.446160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.446368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.446416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.446570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.711 [2024-07-24 19:21:34.446623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.711 qpair failed and we were unable to recover it.
00:24:28.711 [2024-07-24 19:21:34.446771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.446823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.447005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.447054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.447233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.447291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.447427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.447487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.447609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.447668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.447821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.447877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.448076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.448132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.448307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.448358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.448558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.448614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.448750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.448803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.448954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.448981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.449167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.449218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.449392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.449445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.449701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.449751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.449917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.449967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.450156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.450207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.450392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.450441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.450606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.450662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.450839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.450893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.450989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.451014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.451176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.451227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.451413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.451469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.451679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.451729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.451884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.451937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.452098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.452155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.452365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.452416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.452612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.452665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.452758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.452783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.452941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.452992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.453153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.453210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.453378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.453438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.453633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.453683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.712 qpair failed and we were unable to recover it.
00:24:28.712 [2024-07-24 19:21:34.453821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.712 [2024-07-24 19:21:34.453868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.454063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.454114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.454318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.454372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.454560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.454610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.454765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.454816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.454963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.455015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.455175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.455229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.455389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.455447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.455664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.455715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.455856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.455906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.456063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.456115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.456279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.456335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.456538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.456587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.456776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.456826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.456928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.456954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.457165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.457212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.457350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.457401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.457554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.457607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.457770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.457826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.458026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.458079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.458226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.458279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.458470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.458524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.458751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.458801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.458903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.458930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.459125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.459181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.459396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.459446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.459632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.459688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.459791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.459817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.459958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.460014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.460208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.460258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.460379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.460431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.460580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.713 [2024-07-24 19:21:34.460633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.713 qpair failed and we were unable to recover it.
00:24:28.713 [2024-07-24 19:21:34.460784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.460843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.461064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.461114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.461271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.461298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.461443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.461504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.461705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.461755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.461860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.461887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.462019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.462074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.462227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.462282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.462427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.462475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.462662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.462710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.462840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.462895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.463022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.463079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.463199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.463252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.463431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.463475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.463615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.463672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.463826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.463877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.463989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.464050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.464196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.464253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.464437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.464496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.464692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.464739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.464941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.464988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.465143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.465168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.465307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.465355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.465492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.465542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.465730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.465782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.465950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.465998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.466168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.466221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.466358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.466418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.466606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.466657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.466863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.466912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.467010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.467036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.467192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.467245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.714 [2024-07-24 19:21:34.467389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.714 [2024-07-24 19:21:34.467445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.714 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.467655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.467683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.467808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.467863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.468036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.468089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.468302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.468352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.468543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.468590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.468688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.468714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.468896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.468923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.469121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.469174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.469356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.469407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.469545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.469602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.469774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.469829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.470020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.470068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.470224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.470251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.470385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.470443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.470651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.470703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.470878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.470928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.471105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.471155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.471256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.471283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.471407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.471436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.471601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.471655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.471805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.471853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.471981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.472034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.472195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.472247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.472399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.472450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.472559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.472588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.472731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.472789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.472889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.472920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.473089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.473146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.473339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.473388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.473562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.473590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.473689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.473714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.473869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.473895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.474064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.474123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.474330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.474395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.474568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.474621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.715 qpair failed and we were unable to recover it.
00:24:28.715 [2024-07-24 19:21:34.474817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.715 [2024-07-24 19:21:34.474843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.475014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.475065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.475211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.475262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.475359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.475386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.475579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.475636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.475795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.475822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.475923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.475949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.476145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.476192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.476328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.476380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.476540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.476569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.476748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.476775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.476875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.476901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.477031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.477084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.477234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.477289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.477428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.477491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.477647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.477699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.477860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.477911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.478091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.478143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.478315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.478373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.478568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.478619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.478843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.478893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.479104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.479152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.479304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.479358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.479547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.479598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.479801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.479826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.479969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.480027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.480201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.480254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.480353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.480380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.480520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.480573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.480743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.480797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.481012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.481062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.481256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.481304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.481542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.481568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.481726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.481779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.481908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.481959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.482058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.716 [2024-07-24 19:21:34.482084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.716 qpair failed and we were unable to recover it.
00:24:28.716 [2024-07-24 19:21:34.482225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.717 [2024-07-24 19:21:34.482279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.717 qpair failed and we were unable to recover it.
00:24:28.717 [2024-07-24 19:21:34.482426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.717 [2024-07-24 19:21:34.482476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.717 qpair failed and we were unable to recover it.
00:24:28.717 [2024-07-24 19:21:34.482653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.717 [2024-07-24 19:21:34.482679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.717 qpair failed and we were unable to recover it.
00:24:28.717 [2024-07-24 19:21:34.482811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.717 [2024-07-24 19:21:34.482868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.717 qpair failed and we were unable to recover it.
00:24:28.717 [2024-07-24 19:21:34.483007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.717 [2024-07-24 19:21:34.483061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.717 qpair failed and we were unable to recover it.
00:24:28.717 [2024-07-24 19:21:34.483244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.717 [2024-07-24 19:21:34.483295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.717 qpair failed and we were unable to recover it.
00:24:28.717 [2024-07-24 19:21:34.483459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.717 [2024-07-24 19:21:34.483518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.717 qpair failed and we were unable to recover it.
00:24:28.717 [2024-07-24 19:21:34.483671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.717 [2024-07-24 19:21:34.483723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.717 qpair failed and we were unable to recover it.
00:24:28.717 [2024-07-24 19:21:34.483910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.717 [2024-07-24 19:21:34.483966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.717 qpair failed and we were unable to recover it.
00:24:28.717 [2024-07-24 19:21:34.484187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.717 [2024-07-24 19:21:34.484244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.717 qpair failed and we were unable to recover it.
00:24:28.717 [2024-07-24 19:21:34.484403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.717 [2024-07-24 19:21:34.484432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.717 qpair failed and we were unable to recover it.
00:24:28.717 [2024-07-24 19:21:34.484541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.717 [2024-07-24 19:21:34.484568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.717 qpair failed and we were unable to recover it.
00:24:28.717 [2024-07-24 19:21:34.484734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.717 [2024-07-24 19:21:34.484790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.717 qpair failed and we were unable to recover it.
00:24:28.717 [2024-07-24 19:21:34.484946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.717 [2024-07-24 19:21:34.484996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.717 qpair failed and we were unable to recover it.
00:24:28.717 [2024-07-24 19:21:34.485206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.485256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.485417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.485470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.485617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.485669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.485806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.485871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.486041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.486091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.486298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.486360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.486534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.486560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.486668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.486695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.486855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.486906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.487030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.487088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 
00:24:28.717 [2024-07-24 19:21:34.487261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.487318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.487466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.487522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.487626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.487651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.487817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.487873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.488010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.488061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.488230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.488256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.488408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.488433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.488571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.488624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.488788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.488844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 00:24:28.717 [2024-07-24 19:21:34.489028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.717 [2024-07-24 19:21:34.489079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.717 qpair failed and we were unable to recover it. 
00:24:28.717 [2024-07-24 19:21:34.489209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.489262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.489398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.489455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.489593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.489655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.489750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.489777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.489938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.489992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.490154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.490179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.490284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.490309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.490445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.490502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.490595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.490621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.490802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.490859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 
00:24:28.718 [2024-07-24 19:21:34.491045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.491096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.491274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.491331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.491529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.491555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.491763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.491812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.491912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.491937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.492036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.492062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.492159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.492184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.492282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.492307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.492408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.492435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.492601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.492657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 
00:24:28.718 [2024-07-24 19:21:34.492755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.492781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.492890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.492955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.493089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.493144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.493305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.493331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.493471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.493541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.493674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.493727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.493884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.493937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.494090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.494117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.494273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.494327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.494433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.494466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 
00:24:28.718 [2024-07-24 19:21:34.494582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.494609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.494770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.718 [2024-07-24 19:21:34.494827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.718 qpair failed and we were unable to recover it. 00:24:28.718 [2024-07-24 19:21:34.494970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.495027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.495129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.495209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.495356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.495407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.495576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.495628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.495769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.495821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.495979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.496022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.496189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.496238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.496473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.496537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 
00:24:28.719 [2024-07-24 19:21:34.496642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.496668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.496763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.496788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.496953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.496978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.497127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.497178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.497340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.497399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.497643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.497695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.497794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.497821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.497983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.498008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.498130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.498182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.498338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.498388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 
00:24:28.719 [2024-07-24 19:21:34.498522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.498567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.498726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.498776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.498975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.499028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.499156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.499216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.499411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.499462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.499644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.499671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.499772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.499810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.499913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.499939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.500117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.500163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.500268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.500296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 
00:24:28.719 [2024-07-24 19:21:34.500467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.500529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.500691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.500745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.500927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.500975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.501154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.719 [2024-07-24 19:21:34.501202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.719 qpair failed and we were unable to recover it. 00:24:28.719 [2024-07-24 19:21:34.501361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.501414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.501580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.501636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.501853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.501902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.502048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.502102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.502200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.502226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.502355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.502403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 
00:24:28.720 [2024-07-24 19:21:34.502585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.502639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.502817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.502869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.502988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.503042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.503182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.503239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.503376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.503433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.503581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.503634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.503763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.503817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.503955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.504005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.504161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.504186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.504287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.504314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 
00:24:28.720 [2024-07-24 19:21:34.504449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.504506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.504600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.504625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.504769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.504820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.504992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.505052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.505236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.505291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.505506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.505551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.505715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.505769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.505962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.506011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.506204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.506258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.506382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.506437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 
00:24:28.720 [2024-07-24 19:21:34.506603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.506657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.506850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.506877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.507021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.507078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.507223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.507278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.507396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.507423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.507531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.507559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.507711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.507742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.507941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.507993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.508119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.720 [2024-07-24 19:21:34.508176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.720 qpair failed and we were unable to recover it. 00:24:28.720 [2024-07-24 19:21:34.508376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.508431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 
00:24:28.721 [2024-07-24 19:21:34.508570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.508628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.508782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.508835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.509007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.509060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.509205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.509255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.509457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.509512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.509709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.509759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.509874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.509901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.510068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.510093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.510266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.510325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.510495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.510548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 
00:24:28.721 [2024-07-24 19:21:34.510736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.510784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.510923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.510977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.511112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.511164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.511258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.511283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.511476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.511543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.511704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.511756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.511896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.511952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.512106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.512160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.512329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.512385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.512564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.512616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 
00:24:28.721 [2024-07-24 19:21:34.512766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.512793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.512938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.512965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.513127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.513180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.513338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.513401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.513564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.513620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.513816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.513871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.514025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.514051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.514222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.514273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.514500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.514558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.514742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.514790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 
00:24:28.721 [2024-07-24 19:21:34.514942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.514997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.515145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.515199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.515321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.515380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.515531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.721 [2024-07-24 19:21:34.515558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.721 qpair failed and we were unable to recover it. 00:24:28.721 [2024-07-24 19:21:34.515725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.515782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.515935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.515991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.516086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.516113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.516290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.516347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.516531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.516562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.516724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.516778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 
00:24:28.722 [2024-07-24 19:21:34.516916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.516968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.517154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.517201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.517393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.517450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.517632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.517676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.517830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.517885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.518028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.518080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.518251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.518301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.518473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.518521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.518694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.518747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.518880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.518933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 
00:24:28.722 [2024-07-24 19:21:34.519042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.519070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.519244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.519297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.519506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.519548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.519692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.519748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.519906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.519958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.520120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.520170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.520318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.520370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.520531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.520560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.520658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.520685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.520780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.520806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 
00:24:28.722 [2024-07-24 19:21:34.520945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.520997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.521137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.521195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.521366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.521414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.521562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.521622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.521828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.521878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.522048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.522102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.522276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.522328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.722 [2024-07-24 19:21:34.522512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.722 [2024-07-24 19:21:34.522564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.722 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.522737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.522788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.522936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.522962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 
00:24:28.723 [2024-07-24 19:21:34.523060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.523085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.523246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.523272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.523434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.523460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.523637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.523662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.523817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.523870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.524040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.524097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.524234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.524291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.524453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.524511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.524639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.524688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.524810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.524861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 
00:24:28.723 [2024-07-24 19:21:34.525013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.525062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.525252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.525304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.525441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.525508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.525658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.525709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.525909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.525962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.526155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.526208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.526426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.526476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.526648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.526697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.526906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.526959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.527113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.527155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 
00:24:28.723 [2024-07-24 19:21:34.527307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.527361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.527518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.527570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.527665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.527690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.527854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.527902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.528042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.528092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.528230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.528281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.528457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.528487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.723 [2024-07-24 19:21:34.528636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.723 [2024-07-24 19:21:34.528690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.723 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.528890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.528947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.529115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.529166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 
00:24:28.724 [2024-07-24 19:21:34.529318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.529369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.529530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.529556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.529716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.529769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.529865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.529891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.530019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.530075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.530176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.530202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.530372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.530420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.530651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.530707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.530918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.530966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.531066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.531092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 
00:24:28.724 [2024-07-24 19:21:34.531279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.531332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.531509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.531564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.531723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.531777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.531877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.531902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.532091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.532116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.532224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.532253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.532434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.532500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.532666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.532728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.532827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.532907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.533107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.533158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 
00:24:28.724 [2024-07-24 19:21:34.533292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.533343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.533544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.533575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.533724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.533778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.533988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.534039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.534198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.534248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.534435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.534497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.534643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.534694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.534843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.534892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.535104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.535131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.535284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.535336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 
00:24:28.724 [2024-07-24 19:21:34.535542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.535593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.724 qpair failed and we were unable to recover it. 00:24:28.724 [2024-07-24 19:21:34.535780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.724 [2024-07-24 19:21:34.535828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.535960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.536016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.536171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.536216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.536362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.536417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.536603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.536653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.536809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.536858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.537049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.537097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.537193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.537219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.537389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.537415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 
00:24:28.725 [2024-07-24 19:21:34.537554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.537606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.537796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.537847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.538018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.538069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.538169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.538195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.538348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.538406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.538625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.538684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.538875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.538925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.539088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.539113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.539210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.539237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.539425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.539476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 
00:24:28.725 [2024-07-24 19:21:34.539681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.539730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.539873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.539926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.540103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.540150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.540301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.540354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.540539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.540567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.540667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.540693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.540859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.540909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.541060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.541112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.541292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.541318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.541464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.541529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 
00:24:28.725 [2024-07-24 19:21:34.541721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.541770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.541947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.541973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.542113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.542164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.542322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.542373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.542535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.542564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.542743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.542794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.542928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.725 [2024-07-24 19:21:34.542982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.725 qpair failed and we were unable to recover it. 00:24:28.725 [2024-07-24 19:21:34.543174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.543201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.543388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.543414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.543512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.543539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 
00:24:28.726 [2024-07-24 19:21:34.543716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.543765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.543926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.543983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.544165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.544213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.544402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.544453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.544693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.544744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.544886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.544939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.545115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.545162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.545388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.545438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.545599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.545653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.545820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.545847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 
00:24:28.726 [2024-07-24 19:21:34.546013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.546038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.546175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.546228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.546398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.546449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.546624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.546683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.546822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.546873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.547005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.547060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.547239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.547266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.547366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.547393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.547492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.547518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.547664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.547714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 
00:24:28.726 [2024-07-24 19:21:34.547880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.547935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.548079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.548131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.548226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.548251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.548454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.548514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.548628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.548687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.548846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.548903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.548994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.549019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.549176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.549222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.549384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.549440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 00:24:28.726 [2024-07-24 19:21:34.549622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.726 [2024-07-24 19:21:34.549649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.726 qpair failed and we were unable to recover it. 
00:24:28.726 [2024-07-24 19:21:34.549809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.726 [2024-07-24 19:21:34.549860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.726 qpair failed and we were unable to recover it.
00:24:28.726 [2024-07-24 19:21:34.550042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.726 [2024-07-24 19:21:34.550091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.726 qpair failed and we were unable to recover it.
00:24:28.726 [2024-07-24 19:21:34.550291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.726 [2024-07-24 19:21:34.550345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.726 qpair failed and we were unable to recover it.
00:24:28.726 [2024-07-24 19:21:34.550503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.550552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.550711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.550765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.550899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.550959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.551110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.551168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.551322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.551369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.551529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.551556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.551655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.551682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.551812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.551838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.551962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.551988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.552109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.552136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.552264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.552290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.552406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.552435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.552549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.552576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.552679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.552707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.552838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.552863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.553013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.553065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.553187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.553214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.553381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.553426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.553597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.553648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.553810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.553866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.554014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.554070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.554202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.554255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.554409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.554466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.554581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.554610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.554778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.554828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.555021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.555068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.555232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.555285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.555455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.555515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.555679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.555706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.555930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.555983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.556156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.556209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.556374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.556401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.556492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.556519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.556687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.727 [2024-07-24 19:21:34.556713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.727 qpair failed and we were unable to recover it.
00:24:28.727 [2024-07-24 19:21:34.556830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.556889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.557055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.557101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.557304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.557360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.557535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.557562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.557711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.557763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.557977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.558027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.558200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.558250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.558416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.558464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.558578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.558603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.558783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.558833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.559060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.559113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.559269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.559321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.559515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.559542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.559668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.559722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.559844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.559896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.560075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.560133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.560449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.560512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.560711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.560759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.560933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.560991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.561180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.561236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.561399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.561453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.561669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.561719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.561881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.561930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.562092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.562139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.562327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.562387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.562533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.562559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.562721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.562749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.562912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.562969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.563120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.563171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.563330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.563359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.728 qpair failed and we were unable to recover it.
00:24:28.728 [2024-07-24 19:21:34.563547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.728 [2024-07-24 19:21:34.563597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.563759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.563807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.563960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.564017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.564176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.564228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.564427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.564489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.564637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.564684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.564780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.564805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.564909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.564936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.565087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.565138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.565346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.565398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.565528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.565587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.565737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.565789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.565975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.566027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.566152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.566208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.566361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.566387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.566545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.566597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.566708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.566734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.566883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.566908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.567036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.567097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.567254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.567311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.567419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.567444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.567552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.567635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.567782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.567836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.568015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.568066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.568281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.568334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.568460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.568523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.568661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.568714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.568841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.568902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.569052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.569078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.569171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.569197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.569372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.569421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.569594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.569645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.569801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.569858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.570015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.570043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.570201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.570252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.729 [2024-07-24 19:21:34.570408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.729 [2024-07-24 19:21:34.570437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.729 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.570607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.570667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.570804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.570858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.571027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.571054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.571166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.571193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.571300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.571327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.571530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.571558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.571767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.571819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.571982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.572035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.572213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.572265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.572406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.572456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.572570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.572635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.572773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.572830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.572997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.573052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.573207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.573260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.573418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.573469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.573697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.573749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.573937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.574015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.574278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.574338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.574528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.574556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.574719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.574745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.574895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.574947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.575147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.575197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.575371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.575423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.575614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.575672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.575824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.575876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.575991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.576048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.576232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.576285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.576491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.576536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.576703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.576755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.576939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.576990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.577158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.577207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.577323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.577380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.577546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.577573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.577744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.577794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.730 [2024-07-24 19:21:34.577952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.730 [2024-07-24 19:21:34.577981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.730 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.578147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.578201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.578398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.578446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.578618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.578669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.578807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.578864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.579010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.579062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.579157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.579184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.579352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.579379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.579492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.579518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.579668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.579724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.579822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.579848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.580052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.580101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.580262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.580310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.580529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.580583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.580756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.580811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.580997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.581024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.581191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.581218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.581354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.581411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.581523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.581551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.581713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.581762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.581916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.581971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.582092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.582149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.582326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.582382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.582545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.582572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.582765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.582814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.582960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.583016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.583210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.583264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.583367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.583394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.583540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.583581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.583752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.583806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.583961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.583987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.584083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.584109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.584200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.731 [2024-07-24 19:21:34.584226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.731 qpair failed and we were unable to recover it.
00:24:28.731 [2024-07-24 19:21:34.584391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.731 [2024-07-24 19:21:34.584443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.731 qpair failed and we were unable to recover it. 00:24:28.731 [2024-07-24 19:21:34.584609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.731 [2024-07-24 19:21:34.584665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.731 qpair failed and we were unable to recover it. 00:24:28.731 [2024-07-24 19:21:34.584803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.731 [2024-07-24 19:21:34.584828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.731 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.584936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.584965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.585117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.585170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.585324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.585376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.585469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.585507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.585665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.585720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.585849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.585902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.586072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.586118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 
00:24:28.732 [2024-07-24 19:21:34.586264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.586321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.586494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.586540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.586663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.586718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.586863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.586917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.587059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.587110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.587271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.587327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.587426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.587456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.587641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.587702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.587867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.587917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.588104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.588161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 
00:24:28.732 [2024-07-24 19:21:34.588305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.588359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.588525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.588553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.588677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.588728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.588907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.588956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.589132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.589187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.589343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.589397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.589584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.589635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.589804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.589854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.590014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.590066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.590192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.590243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 
00:24:28.732 [2024-07-24 19:21:34.590416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.590471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.590665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.590718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.590817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.590843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.590979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.591032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.591187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.591214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.591399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.591449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.591593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.732 [2024-07-24 19:21:34.591654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.732 qpair failed and we were unable to recover it. 00:24:28.732 [2024-07-24 19:21:34.591813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.591867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.591989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.592049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.592187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.592240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 
00:24:28.733 [2024-07-24 19:21:34.592434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.592506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.592703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.592755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.592906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.592960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.593178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.593257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.593534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.593563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.593720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.593774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.593980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.594034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.594182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.594208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.594399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.594449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.594618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.594676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 
00:24:28.733 [2024-07-24 19:21:34.594877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.594935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.595076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.595130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.595296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.595321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.595461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.595525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.595672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.595713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.595872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.595929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.596102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.596127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.596287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.596336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.596475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.596543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.596655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.596715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 
00:24:28.733 [2024-07-24 19:21:34.596885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.596936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.597082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.597137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.597295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.597346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.597499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.597546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.597717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.597743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.597876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.733 [2024-07-24 19:21:34.597926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.733 qpair failed and we were unable to recover it. 00:24:28.733 [2024-07-24 19:21:34.598025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.598052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.598232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.598283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.598530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.598556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.598740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.598791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 
00:24:28.734 [2024-07-24 19:21:34.598960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.598990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.599169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.599221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.599357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.599415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.599597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.599655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.599826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.599878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.600077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.600103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.600290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.600339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.600505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.600557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.600724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.600775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.600948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.600975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 
00:24:28.734 [2024-07-24 19:21:34.601074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.601100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.601243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.601300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.601430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.601492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.601706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.601738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.601895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.601922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.602099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.602154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.602293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.602346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.602530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.602557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.602723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.602779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.602946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.602973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 
00:24:28.734 [2024-07-24 19:21:34.603071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.603097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.603266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.603323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.603465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.603522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.603715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.603765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.603924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.603974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.604141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.604192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.604335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.604389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.604554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.604617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.734 qpair failed and we were unable to recover it. 00:24:28.734 [2024-07-24 19:21:34.604770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.734 [2024-07-24 19:21:34.604822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.604970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.605028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 
00:24:28.735 [2024-07-24 19:21:34.605192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.605217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.605379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.605436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.605604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.605653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.605748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.605774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.605876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.605902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.606073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.606121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.606330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.606380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.606524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.606571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.606748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.606801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.606994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.607043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 
00:24:28.735 [2024-07-24 19:21:34.607203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.607267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.607444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.607505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.607640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.607694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.607880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.607936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.608074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.608128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.608267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.608333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.608508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.608535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.608706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.608757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.608925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.608980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.609118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.609176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 
00:24:28.735 [2024-07-24 19:21:34.609355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.609407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.609513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.609540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.609693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.609745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.609910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.609936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.610125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.610180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.610309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.610363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.610536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.610563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.610717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.610770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.610927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.610980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.611172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.611226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 
00:24:28.735 [2024-07-24 19:21:34.611367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.611420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.611611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.611662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.611800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.611854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.612006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.735 [2024-07-24 19:21:34.612059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.735 qpair failed and we were unable to recover it. 00:24:28.735 [2024-07-24 19:21:34.612230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.612255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.612414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.612465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.612648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.612675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.612891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.612969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.613231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.613292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.613497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.613551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 
00:24:28.736 [2024-07-24 19:21:34.613726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.613779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.613932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.613981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.614088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.614114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.614279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.614333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.614516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.614559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.614702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.614756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.614935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.614962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.615105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.615157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.615329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.615356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.615509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.615561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 
00:24:28.736 [2024-07-24 19:21:34.615737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.615788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.615896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.615922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.616035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.616095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.616269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.616320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.616421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.616447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.616580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.616631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.616813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.616869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.617016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.617074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.617234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.617290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.617434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.617491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 
00:24:28.736 [2024-07-24 19:21:34.617634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.617690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.617885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.617940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.618063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.618116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.618313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.618361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.618542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.618571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.618729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.736 [2024-07-24 19:21:34.618756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-07-24 19:21:34.618922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.618973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.619112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.619163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.619326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.619353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.619535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.619564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 
00:24:28.737 [2024-07-24 19:21:34.619739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.619794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.619942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.619989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.620176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.620203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.620386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.620433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.620593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.620650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.620841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.620885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.621078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.621130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.621261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.621318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.621530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.621558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.621699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.621753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 
00:24:28.737 [2024-07-24 19:21:34.621897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.621949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.622122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.622148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.622312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.622359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.622459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.622492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.622661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.622719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.622864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.622918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.623080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.623126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.623224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.623250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.623359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.623387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.623524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.623587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 
00:24:28.737 [2024-07-24 19:21:34.623709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.623735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.623865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.623922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.624133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.624183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.624350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.624401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.624567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.624620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.624722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.624748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.624925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.624977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.625121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.625169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.625316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.625342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.625537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.625563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 
00:24:28.737 [2024-07-24 19:21:34.625737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.625789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.625956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.737 [2024-07-24 19:21:34.626008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.737 qpair failed and we were unable to recover it. 00:24:28.737 [2024-07-24 19:21:34.626165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.626218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.626345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.626397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.626611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.626669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.626764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.626790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.626943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.626970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.627068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.627094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.627231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.627282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.627453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.627516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 
00:24:28.738 [2024-07-24 19:21:34.627614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.627640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.627782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.627831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.627984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.628034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.628206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.628259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.628466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.628527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.628622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.628647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.628802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.628852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.629016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.629068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.629220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.629276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.629433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.629489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 
00:24:28.738 [2024-07-24 19:21:34.629615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.629671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.629814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.629866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.630021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.630077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.630279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.630336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.630540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.630567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.630757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.630806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.630956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.631008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.631106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.631183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.631375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.631425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.631591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.631643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 
00:24:28.738 [2024-07-24 19:21:34.631741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.631768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.631909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.631964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.738 qpair failed and we were unable to recover it. 00:24:28.738 [2024-07-24 19:21:34.632094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.738 [2024-07-24 19:21:34.632148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.632286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.632339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.632494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.632544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.632707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.632758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.632970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.633020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.633186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.633230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.633383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.633408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.633508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.633535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 
00:24:28.739 [2024-07-24 19:21:34.633712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.633766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.633917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.633969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.634122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.634150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.634295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.634347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.634533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.634559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.634713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.634766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.634929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.634981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.635119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.635173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.635356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.635406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.635508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.635535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 
00:24:28.739 [2024-07-24 19:21:34.635688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.635742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.635910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.635967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.636117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.636169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.636335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.636383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.636552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.636607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.636702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.636727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.636898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.636959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.637108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.637159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.637260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.637290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.637421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.637473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 
00:24:28.739 [2024-07-24 19:21:34.637574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.637600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.637759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.637808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.637981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.638032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.638198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.638263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.739 qpair failed and we were unable to recover it. 00:24:28.739 [2024-07-24 19:21:34.638438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.739 [2024-07-24 19:21:34.638495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.638686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.638712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.638856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.638907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.639062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.639115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.639282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.639310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.639471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.639532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 
00:24:28.740 [2024-07-24 19:21:34.639636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.639663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.639762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.639789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.639967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.640018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.640179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.640235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.640381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.640434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.640620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.640678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.640858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.640906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.641079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.641106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.641275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.641327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.641511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.641553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 
00:24:28.740 [2024-07-24 19:21:34.641711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.641761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.641928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.641978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.642136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.642187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.642365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.642418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.642521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.642547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.642652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.642678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.642818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.642870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.643017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.643070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.643243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.643268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.643369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.643397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 
00:24:28.740 [2024-07-24 19:21:34.643561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.643613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.643819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.643871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.643966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.643993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.644142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.644168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.644368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.644424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.644584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.644634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.644846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.644896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.645041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.645092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.645239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.740 [2024-07-24 19:21:34.645295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.740 qpair failed and we were unable to recover it. 00:24:28.740 [2024-07-24 19:21:34.645475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.645533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 
00:24:28.741 [2024-07-24 19:21:34.645685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.645712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.645812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.645837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.646004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.646061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.646236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.646288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.646452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.646513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.646675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.646704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.646844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.646903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.647074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.647124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.647292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.647340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.647527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.647554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 
00:24:28.741 [2024-07-24 19:21:34.647758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.647806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.647906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.647933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.648036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.648063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.648197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.648248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.648410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.648437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.648542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.648568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.648730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.648783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.648981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.649032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.649173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.649199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.649304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.649331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 
00:24:28.741 [2024-07-24 19:21:34.649530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.649558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.649717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.649769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.649940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.649968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.650132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.650158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.650325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.650379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.650569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.650620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.650818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.650868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.651031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.651056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.651210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.651266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.651463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.651521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 
00:24:28.741 [2024-07-24 19:21:34.651705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.651758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.651897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.651950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.652088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.652145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.652242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.741 [2024-07-24 19:21:34.652267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.741 qpair failed and we were unable to recover it. 00:24:28.741 [2024-07-24 19:21:34.652375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.742 [2024-07-24 19:21:34.652402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.742 qpair failed and we were unable to recover it. 00:24:28.742 [2024-07-24 19:21:34.652555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.742 [2024-07-24 19:21:34.652606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.742 qpair failed and we were unable to recover it. 00:24:28.742 [2024-07-24 19:21:34.652740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.742 [2024-07-24 19:21:34.652795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.742 qpair failed and we were unable to recover it. 00:24:28.742 [2024-07-24 19:21:34.652959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.742 [2024-07-24 19:21:34.653011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.742 qpair failed and we were unable to recover it. 00:24:28.742 [2024-07-24 19:21:34.653154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.742 [2024-07-24 19:21:34.653211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.742 qpair failed and we were unable to recover it. 00:24:28.742 [2024-07-24 19:21:34.653394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.742 [2024-07-24 19:21:34.653444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:28.742 qpair failed and we were unable to recover it. 
00:24:28.742 [2024-07-24 19:21:34.653636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.653662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.653791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.653843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.653981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.654032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.654133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.654160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.654257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.654284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.654394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.654420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.654584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.654642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.654777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.654829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.655015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.655067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.655217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.655243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.655398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.655453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.655645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.655693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.655882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.655936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.656128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.656178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.656332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.656384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.656497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.656526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.656652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.656711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.656835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.656893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.656998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.657024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.657137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.657163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.657338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.657391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.657554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.657606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.657762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.657819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.657971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.658023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.658181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.658233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.658460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.658516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.658669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.658723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.658902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.658953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.659099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.659152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.659299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.742 [2024-07-24 19:21:34.659351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.742 qpair failed and we were unable to recover it.
00:24:28.742 [2024-07-24 19:21:34.659532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.659561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.659725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.659776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.659956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.660009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.660214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.660265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.660390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.660442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.660631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.660681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.660780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.660806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.660952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.660992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.661198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.661225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.661326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.661357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.661508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.661555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.661692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.661743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.661927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.661984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.662222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.662274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.662466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.662521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.662688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.662738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.662910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.662966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.663119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.663146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.663294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.663347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.663541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.663568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.663739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.663796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.663931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.663985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.664155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.664206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.664375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.664439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.664599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.664651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.664826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.664878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.664978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.665004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.665146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.665199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.665312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.665372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.665549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.665602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.665792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.665841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.666028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.666079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.743 [2024-07-24 19:21:34.666192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.743 [2024-07-24 19:21:34.666217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.743 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.666359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.666414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.666537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.666593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.666732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.666790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.666934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.666986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.667114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.667166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.667383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.667434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.667628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.667680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.667849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.667899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.668090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.668140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.668293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.668346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.668504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.668553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.668660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.668685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.668832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.668887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.669033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.669092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.669277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.669326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.669494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.669546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.669693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.669745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.669874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.669928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.670119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.670171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.670326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.670382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.670570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.670619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.670786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.670840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.671017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.671069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.671222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.671275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.671453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.671511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.671721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.671770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.671921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.671967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.672133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.672185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.672392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.672447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.672600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.672652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.672814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.672865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.673068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.673117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.673369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.673420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.673620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.673648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.673751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.673779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.673932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.673981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.674077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.674103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.674249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.674300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.674447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.674517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.674618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.674645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.674758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.744 [2024-07-24 19:21:34.674784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.744 qpair failed and we were unable to recover it.
00:24:28.744 [2024-07-24 19:21:34.674992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.675044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.675246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.675297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.675469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.675526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.675664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.675718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.675891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.675941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.676049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.676076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.676234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.676287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.676385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.676411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.676561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.676619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.676719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.676746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.676844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.676871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.677010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.677067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.677235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.677289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.677494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.677539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.677641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.677669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.677811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.677861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.678021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.678048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.678191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.678248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.678496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.678551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.678741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.678793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.678987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.679036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.679219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.679271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.679372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.679402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.679597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.679648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.679859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.679910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.680078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.680104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.680201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.680228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.680370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.680422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.680607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.680658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.680890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.680946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.681132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.681180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.681277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.681304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.681461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.681496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.681653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.681703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.681799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.681825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.681987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.682014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.682174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.682226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.682426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.682484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.682645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.682697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.682928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.682980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.683076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.683103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.745 qpair failed and we were unable to recover it.
00:24:28.745 [2024-07-24 19:21:34.683227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.745 [2024-07-24 19:21:34.683280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.683441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.683466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.683647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.683704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.683805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.683832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.684000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.684054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.684170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.684238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.684425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.684484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.684665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.684715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.684817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.684844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.685048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.685099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.685200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.685227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.685371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.685425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.685552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.685609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.685725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.685779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.685878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.685905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.686013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.686042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.686156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.686182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.686308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.686334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.686465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.686496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.686689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.686736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.686870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.686927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.687087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.687144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.687249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.687276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.687466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.687527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.687658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.746 [2024-07-24 19:21:34.687719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:28.746 qpair failed and we were unable to recover it.
00:24:28.746 [2024-07-24 19:21:34.687817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.746 [2024-07-24 19:21:34.687843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.746 qpair failed and we were unable to recover it. 00:24:28.746 [2024-07-24 19:21:34.687982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.746 [2024-07-24 19:21:34.688034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.746 qpair failed and we were unable to recover it. 00:24:28.746 [2024-07-24 19:21:34.688175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.746 [2024-07-24 19:21:34.688228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:28.746 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.688406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.688464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.688643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.688695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.688813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.688876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.689049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.689102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.689236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.689287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.689381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.689407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.689515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.689543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 
00:24:29.031 [2024-07-24 19:21:34.689642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.689668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.689826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.689882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.690006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.690055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.690248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.690297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.690503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.690543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.690672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.690723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.690848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.690905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.691093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.691147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.691331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.691357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.691527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.691554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 
00:24:29.031 [2024-07-24 19:21:34.691700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.691753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.691899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.691952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.692086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.692140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.692304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.692355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.692514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.692554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.692727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.692778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.692963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.692989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.693125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.693183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.693330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.693382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.693548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.693596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 
00:24:29.031 [2024-07-24 19:21:34.693786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.693840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.693946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.693973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.694109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.694163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-24 19:21:34.694299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-24 19:21:34.694352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.694527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.694554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.694675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.694728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.694875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.694929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.695075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.695130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.695276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.695330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.695504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.695550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 
00:24:29.032 [2024-07-24 19:21:34.695709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.695760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.695858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.695884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.696007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.696060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.696183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.696235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.696364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.696416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.696530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.696558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.696778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.696834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.696981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.697035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.697137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.697163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.697286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.697348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 
00:24:29.032 [2024-07-24 19:21:34.697529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.697556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.697695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.697746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.697899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.697948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.698103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.698158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.698357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.698406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.698528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.698586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.698715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.698768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.698951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.699000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.699098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.699123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.699270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.699321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 
00:24:29.032 [2024-07-24 19:21:34.699447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.699474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.699600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.699658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.699858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.699911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.700008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.700034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.700188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.700238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.700372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.700423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.700532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.700559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.700690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.700744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.700897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-24 19:21:34.700950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-24 19:21:34.701098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.701152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 
00:24:29.033 [2024-07-24 19:21:34.701304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.701330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.701493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.701520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.701679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.701727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.701862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.701915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.702048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.702098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.702277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.702327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.702432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.702459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.705619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.705673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.705882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.705936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.706090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.706143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 
00:24:29.033 [2024-07-24 19:21:34.706314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.706364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.706609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.706658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.706794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.706850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.707033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.707084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.707229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.707290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.707492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.707537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.707638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.707665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.707765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.707791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.707954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.708007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.708200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.708255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 
00:24:29.033 [2024-07-24 19:21:34.708431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.708489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.708678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.708726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.708903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.708928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.709095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.709121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.709318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.709364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.709575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.709629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.709770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.709823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.709979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.710031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.710159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.710216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.710345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.710400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 
00:24:29.033 [2024-07-24 19:21:34.710614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.710664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.710854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.710901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.711109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.711157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.711391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.711443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.711627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-24 19:21:34.711653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-24 19:21:34.711818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.711844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.712003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.712053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.712244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.712293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.712476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.712529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.712745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.712772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 
00:24:29.034 [2024-07-24 19:21:34.712888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.712918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.713080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.713137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.713346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.713394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.713577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.713604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.713786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.713837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.713989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.714039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.714232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.714283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.714423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.714475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.714614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.714669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.714869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.714916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 
00:24:29.034 [2024-07-24 19:21:34.715111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.715137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.715400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.715450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.715663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.715717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.715921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.715975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.716203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.716254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.716419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.716446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.716655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.716707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.716890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.716939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.717179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.717235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.717414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.717467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 
00:24:29.034 [2024-07-24 19:21:34.717626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.717676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.717869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.717919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.718161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.718210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.718358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.718405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.718643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.718695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.718912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.718966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.719114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.719167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.719361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.719411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.719622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.719676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 00:24:29.034 [2024-07-24 19:21:34.719838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.719865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.034 qpair failed and we were unable to recover it. 
00:24:29.034 [2024-07-24 19:21:34.720012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.034 [2024-07-24 19:21:34.720062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.035 qpair failed and we were unable to recover it. 00:24:29.035 [2024-07-24 19:21:34.720262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.035 [2024-07-24 19:21:34.720313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.035 qpair failed and we were unable to recover it. 00:24:29.035 [2024-07-24 19:21:34.720508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.035 [2024-07-24 19:21:34.720561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.035 qpair failed and we were unable to recover it. 00:24:29.035 [2024-07-24 19:21:34.720751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.035 [2024-07-24 19:21:34.720798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.035 qpair failed and we were unable to recover it. 00:24:29.035 [2024-07-24 19:21:34.720945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.035 [2024-07-24 19:21:34.720998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.035 qpair failed and we were unable to recover it. 00:24:29.035 [2024-07-24 19:21:34.721204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.035 [2024-07-24 19:21:34.721254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.035 qpair failed and we were unable to recover it. 00:24:29.035 [2024-07-24 19:21:34.721434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.035 [2024-07-24 19:21:34.721493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.035 qpair failed and we were unable to recover it. 00:24:29.035 [2024-07-24 19:21:34.721660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.035 [2024-07-24 19:21:34.721715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.035 qpair failed and we were unable to recover it. 00:24:29.035 [2024-07-24 19:21:34.721918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.035 [2024-07-24 19:21:34.721971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.035 qpair failed and we were unable to recover it. 00:24:29.035 [2024-07-24 19:21:34.722115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.035 [2024-07-24 19:21:34.722167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.035 qpair failed and we were unable to recover it. 
00:24:29.035 [2024-07-24 19:21:34.722398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.035 [2024-07-24 19:21:34.722448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.035 qpair failed and we were unable to recover it.
00:24:29.041 [the same three-line error repeats for every reconnect attempt from 19:21:34.722 through 19:21:34.769: each connect() to 10.0.0.2 port 4420 is refused (errno 111, ECONNREFUSED) and each qpair (tqpair=0xd42120, 0x7f05fc000b90, 0x7f05f4000b90, 0x7f0604000b90) fails and is not recovered]
00:24:29.041 [2024-07-24 19:21:34.769381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.769432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.769601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.769655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.769811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.769838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.770008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.770061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.770213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.770266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.770364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.770390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.770584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.770635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.770786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.770843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.770965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.771022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.771120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.771145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 
00:24:29.041 [2024-07-24 19:21:34.771302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.771327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.771518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.771568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.771763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.771789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.771990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.772018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.772165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.772217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.772323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.772351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.772452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.772486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.772634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.772689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.772862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-24 19:21:34.772913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-24 19:21:34.773083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.773135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 
00:24:29.042 [2024-07-24 19:21:34.773309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.773362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.773517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.773561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.773661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.773687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.773863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.773889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.774028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.774081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.774283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.774334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.774459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.774526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.774716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.774769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.774952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.775004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.775226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.775283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 
00:24:29.042 [2024-07-24 19:21:34.775437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.775464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.775720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.775770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.775901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.775953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.776097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.776151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.776250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.776283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.776379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.776406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.776598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.776646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.776780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.776836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.777021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.777048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.777245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.777296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 
00:24:29.042 [2024-07-24 19:21:34.777431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.777500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.777628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.777681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.777820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.777873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.778008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.778062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.778165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.778191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.778374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.778423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.778522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.778548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.778649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.778675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.778860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.778887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.779026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.779079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 
00:24:29.042 [2024-07-24 19:21:34.779263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.779313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.779462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.779523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.779717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.779766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.779908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.779953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-24 19:21:34.780135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-24 19:21:34.780191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.780319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.780371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.780550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.780579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.780743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.780795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.780961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.780988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.781172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.781221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 
00:24:29.043 [2024-07-24 19:21:34.781406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.781457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.781720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.781799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.782079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.782135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.782327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.782375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.782553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.782604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.782782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.782809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.783010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.783062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.783253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.783301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.783399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.783426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.783528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.783556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 
00:24:29.043 [2024-07-24 19:21:34.783686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.783745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.783942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.783995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.784155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.784182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.784331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.784381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.784574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.784628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.784817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.784870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.785016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.785069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.785183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.785242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.785378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.785432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.785610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.785658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 
00:24:29.043 [2024-07-24 19:21:34.785798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.785851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.786019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.786070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.786221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.786275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.786377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.786404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.786505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.786531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.786707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.786758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.786950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.787005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.787144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.787195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.787370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.787424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-24 19:21:34.787580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.787633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 
00:24:29.043 [2024-07-24 19:21:34.787830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-24 19:21:34.787881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.788056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.788110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.788324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.788377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.788492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.788519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.788716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.788769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.788924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.788951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.789156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.789207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.789303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.789331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.789542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.789569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.789713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.789760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 
00:24:29.044 [2024-07-24 19:21:34.789947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.789994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.790167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.790209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.790335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.790399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.790515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.790543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.790704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.790757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.790945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.790994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.791218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.791265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.791410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.791463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.791612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.791663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.791797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.791841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 
00:24:29.044 [2024-07-24 19:21:34.791976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.792023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.792247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.792297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.792539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.792565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.792756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.792806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.792948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.792999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.793235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.793286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.793445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.793471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.793658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.793712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.793851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.793914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.794109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.794159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 
00:24:29.044 [2024-07-24 19:21:34.794362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.794388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.794547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.794575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.794770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.794821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.795043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.795093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.795245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.795304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.795459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.795520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-24 19:21:34.795710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-24 19:21:34.795759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-24 19:21:34.795939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-24 19:21:34.795992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-24 19:21:34.796144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-24 19:21:34.796175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-24 19:21:34.796304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-24 19:21:34.796357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 
00:24:29.045 [2024-07-24 19:21:34.796508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-24 19:21:34.796560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-24 19:21:34.797744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-24 19:21:34.797800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-24 19:21:34.801008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-24 19:21:34.801060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
[... the same two errors repeat continuously from 19:21:34.796 through 19:21:34.839 (log time 00:24:29.045 - 00:24:29.051), alternating among tqpair=0xd42120, 0x7f05f4000b90, and 0x7f05fc000b90; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:29.051 [2024-07-24 19:21:34.839389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.839447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.839605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.839659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.839822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.839848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.840005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.840057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.840210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.840265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.840376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.840402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.840504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.840533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.840703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.840759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.840957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.841012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.841110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.841136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 
00:24:29.051 [2024-07-24 19:21:34.841300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.841327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.841495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.841546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.841704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.841760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.841857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.841884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.842019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.842069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.842234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.842284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.842417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.842445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.842591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.842645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.842783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.842835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.842988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.843015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 
00:24:29.051 [2024-07-24 19:21:34.843116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.843143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.843301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.843354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.843450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.843476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.843651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.843704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.843888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.843938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.844129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-24 19:21:34.844156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-24 19:21:34.844288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.844340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.844441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.844468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.844648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.844700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.844871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.844922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 
00:24:29.052 [2024-07-24 19:21:34.845089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.845115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.845215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.845241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.845339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.845366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.845538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.845566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.845737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.845764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.845867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.845894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.846046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.846072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.846249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.846298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.846447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.846472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.846617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.846674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 
00:24:29.052 [2024-07-24 19:21:34.846862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.846909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.847056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.847109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.847273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.847325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.847467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.847527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.847705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.847760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.847919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.847976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.848109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.848163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.848260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.848286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.848498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.848544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.848712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.848762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 
00:24:29.052 [2024-07-24 19:21:34.848883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.848939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.849078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.849134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.849305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.849358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.849527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.849554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.849714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.849764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.849918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.849970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.850119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.850172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.850378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.850431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.850623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.850677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.850830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.850885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 
00:24:29.052 [2024-07-24 19:21:34.851055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.851105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.851262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-24 19:21:34.851320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-24 19:21:34.851474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.851531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.851722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.851770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.851905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.851959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.852054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.852080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.852290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.852347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.852538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.852587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.852797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.852842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.852938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.852964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 
00:24:29.053 [2024-07-24 19:21:34.853059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.853086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.853267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.853315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.853430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.853456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.853564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.853592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.853712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.853738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.853892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.853949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.854120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.854177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.854316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.854367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.854509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.854549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.854729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.854776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 
00:24:29.053 [2024-07-24 19:21:34.854938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.854992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.855173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.855228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.855361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.855415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.855559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.855616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.855731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.855758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.855912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.855966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.856131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.856157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.856252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.856279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.856414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.856467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.856652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.856702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 
00:24:29.053 [2024-07-24 19:21:34.856843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.856896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.857059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.857085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.857254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.857308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.857476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.857531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.857631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.857658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.857811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.857862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.858008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.858064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.858215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-07-24 19:21:34.858242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.053 qpair failed and we were unable to recover it. 00:24:29.053 [2024-07-24 19:21:34.858347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.858374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.858464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.858496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 
00:24:29.054 [2024-07-24 19:21:34.858637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.858690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.858836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.858862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.859025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.859075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.859239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.859266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.859383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.859440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.859602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.859654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.859835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.859883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.860006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.860058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.860234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.860260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.860411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.860466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 
00:24:29.054 [2024-07-24 19:21:34.860620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.860667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.860808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.860863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.860964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.860990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.861109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.861161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.861352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.861402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.861506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.861534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.861681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.861734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.861926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.861974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.862180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.862233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.862370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.862423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 
00:24:29.054 [2024-07-24 19:21:34.862601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.862655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.862793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.862849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.863009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.863062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.863158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.863184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.863370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.863417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.863563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.863614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.863774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.863827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.864011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.864060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.864156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.864182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 00:24:29.054 [2024-07-24 19:21:34.864397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.054 [2024-07-24 19:21:34.864452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.054 qpair failed and we were unable to recover it. 
00:24:29.054 [2024-07-24 19:21:34.864618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-24 19:21:34.864677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats from 19:21:34.864844 through 19:21:34.893918 for tqpair=0x7f05fc000b90, 0xd42120, 0x7f0604000b90, and 0x7f05f4000b90, all with addr=10.0.0.2, port=4420 ...]
00:24:29.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2644842 Killed "${NVMF_APP[@]}" "$@"
00:24:29.059 19:21:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:24:29.059 19:21:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:24:29.059 19:21:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:29.059 19:21:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:29.059 19:21:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:29.059 [... connect() failed (errno = 111) / sock connection error triplets for tqpair=0xd42120, addr=10.0.0.2, port=4420 repeat from 19:21:34.894058 through 19:21:34.895285, interleaved with the trace output above ...]
00:24:29.059 [... the triplet repeats from 19:21:34.895429 through 19:21:34.898836 for tqpair=0xd42120, 0x7f05fc000b90, and 0x7f05f4000b90, all with addr=10.0.0.2, port=4420 ...]
00:24:29.060 19:21:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2645277
00:24:29.060 19:21:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2645277
00:24:29.060 19:21:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:24:29.060 19:21:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2645277 ']'
00:24:29.060 19:21:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:29.060 19:21:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:29.060 19:21:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:29.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:29.060 19:21:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:29.060 19:21:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:29.060 [... connect() failed (errno = 111) / sock connection error triplets for tqpair=0x7f05f4000b90, addr=10.0.0.2, port=4420 repeat from 19:21:34.899840 through 19:21:34.900590, interleaved with the trace output above ...]
00:24:29.060 [2024-07-24 19:21:34.900692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.900719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.900894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.900953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.901118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.901169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.901375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.901425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.901533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.901560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.901667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.901695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.901825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.901855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.901974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.902000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.905499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.905534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.905663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.905690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.905796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.905822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.905948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.905974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.906076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.906102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.906253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.906280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.906406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.906432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.906546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.906573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.906675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.906702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.906820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.906846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.906990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.907016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.907157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.907184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.907295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-24 19:21:34.907324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-24 19:21:34.907432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.907459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.907609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.907640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.907770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.907797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.907899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.907926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.908046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.908073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.908189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.908216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.908350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.908377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.908498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.908527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.908636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.908665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.908814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.908841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.908946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.908973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.909087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.909114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.909220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.909246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.909346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.909373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.909478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.909512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.909642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.909669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.909785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.909811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.909909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.909935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.910052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.910079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.910189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.910215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.910311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.910337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.910448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.910475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.910605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.910632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.910745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.910772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.910874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.910900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.911029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.911055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.911158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.911186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.911303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.911330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.911451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.911495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.911619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.911648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.911759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.911786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.911918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.911952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.912060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.912088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.912183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.912209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.912307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.912332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.061 qpair failed and we were unable to recover it.
00:24:29.061 [2024-07-24 19:21:34.912443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.061 [2024-07-24 19:21:34.912470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.912580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.912606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.912712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.912738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.912841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.912867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.912999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.913025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.913126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.913152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.913263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.913289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.913404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.913431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.913555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.913584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.913683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.913708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.913807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.913832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.913927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.913953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.914090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.914116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.914239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.914266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.914402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.914430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.914560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.914590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.914693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.914719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.914820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.914846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.914942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.914969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.915061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.915088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.915198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.915229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.915331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.915357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.915493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.915521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.915638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.915667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.915784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.915811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.915938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.915964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.916060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.916085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.916195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.916222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.916332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.916359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.916454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.916485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.916603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.916629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.916744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.916771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.916877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.916905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.917020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.917046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.917148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.917175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.917278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.917304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-24 19:21:34.917402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-24 19:21:34.917428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.917561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.917588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.917692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.917719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.917828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.917854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.917950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.917976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.918068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.918094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.918196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.918229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.918372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.918398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.918531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.918560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.918665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.918700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.918804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.918831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.918933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.918959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.919090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.919116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.919257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.919284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.919384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.919410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.919544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.919572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.919675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.919702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.919818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.919847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.919955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.919982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.920100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.920127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.920233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.920259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.920362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.920390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.920507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.920534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.920634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.920660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.920774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.920810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.920928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.920955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.921062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.921087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.921183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.921209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.921301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.921327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.921425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.921453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.921591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.921617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.921709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.921735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.921839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.921866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.921963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.921988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.922122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.922149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.922244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.922270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-24 19:21:34.922368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-24 19:21:34.922394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.922505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.922532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.925490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.925520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.925621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.925648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.925774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.925800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.925930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.925955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.926054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.926080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.926177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.926202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.926305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.926330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.926439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.926466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.926603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.926629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.926756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.926781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.926913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.926938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.927035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.927061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.927160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.927186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.927288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.927323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.927456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.927490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.927599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.927628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.927732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.927758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.927860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.927886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.927983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.928008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.928150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.928177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.928273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.928299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.928428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.928453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.928591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.928617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.928748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.928774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.928882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.928909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.929029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.929057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.929154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.929180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.929284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.929314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.929414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.929440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.929556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.929585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.929703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.929733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.929830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.929856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.930002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.930031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.930161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.930189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-24 19:21:34.930284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-24 19:21:34.930310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-24 19:21:34.930441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-24 19:21:34.930469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-24 19:21:34.930618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-24 19:21:34.930645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-24 19:21:34.930748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-24 19:21:34.930774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-24 19:21:34.930872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-24 19:21:34.930899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-24 19:21:34.931000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-24 19:21:34.931028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-24 19:21:34.931147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.931173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.931310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.931337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.931433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.931459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.931560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.931586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.931690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.931717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.934491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.934524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.934649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.934678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.934802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.934830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.934931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.934958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.935061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.935087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 
00:24:29.065 [2024-07-24 19:21:34.935222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.935256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.935375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.935404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.935542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.935571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.935676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.935702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.935810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.935836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.935972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.935998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.936133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.936158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.936260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.936290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.936420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.936446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.936614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.936641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 
00:24:29.065 [2024-07-24 19:21:34.936776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.936803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.936943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.936972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.937111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.937138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.937274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.937300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.937433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.937460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-24 19:21:34.937582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-24 19:21:34.937614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.937751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.937779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.937912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.937942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.938055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.938081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.938182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.938209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 
00:24:29.066 [2024-07-24 19:21:34.938309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.938336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.938465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.938507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.938613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.938639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.938740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.938766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.938870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.938895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.939032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.939059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.939195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.939222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.939323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.939349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.939446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.939472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.939614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.939642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 
00:24:29.066 [2024-07-24 19:21:34.939745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.939772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.939886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.939913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.940032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.940062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.940167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.940196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.940304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.940330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.940434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.940460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.940566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.940591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.940696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.940722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.940855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.940881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.941007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.941033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 
00:24:29.066 [2024-07-24 19:21:34.941135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.941161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.941289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.941318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.941423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.941452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.941574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.941608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.941742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.941770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.941872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.941898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.942018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.942044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.942146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.942173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.942280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.942309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.942417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.942445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 
00:24:29.066 [2024-07-24 19:21:34.942584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-24 19:21:34.942612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-24 19:21:34.942713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.942740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.942847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.942875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.943002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.943032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.943161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.943188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.943287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.943312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.943405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.943431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.943539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.943574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.943704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.943729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.943860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.943887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 
00:24:29.067 [2024-07-24 19:21:34.943990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.944015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.944121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.944150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.944255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.944283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.944398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.944424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.944540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.944567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.944667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.944693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.944827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.944852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.944969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.944996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.945093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.945119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.945213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.945238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 
00:24:29.067 [2024-07-24 19:21:34.945337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.945363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.945469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.945501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.945605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.945631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.945776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.945801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.945910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.945936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.946042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.946069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.946193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.946219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.946319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.946345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.946467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.946500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.946612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.946638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 
00:24:29.067 [2024-07-24 19:21:34.946766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.946791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.946892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.946917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.947033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.947059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.947179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.947208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.947338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.947372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.947493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.947522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.947636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-24 19:21:34.947663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-24 19:21:34.947768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.947796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.947902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.947928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.948024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.948050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 
00:24:29.068 [2024-07-24 19:21:34.948145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.948171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.948307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.948332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.948433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.948459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.948563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.948592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.948752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.948784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.948891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.948919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.949016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.949043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.949143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.949169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.949280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.949307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.949407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.949433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 
00:24:29.068 [2024-07-24 19:21:34.949536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.949563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.949701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.949727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.949820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.949846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.949943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.949969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.950082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.950108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.950248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.950277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.950382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.950412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.950535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.950562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.950662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.950689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.950791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.950817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 
00:24:29.068 [2024-07-24 19:21:34.950926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.950953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.951061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.951088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.951189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.951215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.951325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.951352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.951469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.951505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.951614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.951643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.951783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.951810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.951913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.951940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.952043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.952071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.952176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.952202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 
00:24:29.068 [2024-07-24 19:21:34.952335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.952361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.952459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.952490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-24 19:21:34.952596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-24 19:21:34.952622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.952724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.952751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.952851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.952883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.952986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.953012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.953144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.953172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.953274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.953301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.953404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.953431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.953537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.953565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 
00:24:29.069 [2024-07-24 19:21:34.953697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.953724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.953827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.953854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.953946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.953972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.954090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.954116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.954237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.954265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.954360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.954388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.954501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.954528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.954641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.954666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.954783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.954809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.954943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.954970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 
00:24:29.069 [2024-07-24 19:21:34.955077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.955104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.955223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.955251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.955352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.955378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.955489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.955515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.955625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.955653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.955779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.955806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.955915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.955942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.956048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.956076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.956199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.956225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.956353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.956378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 
00:24:29.069 [2024-07-24 19:21:34.956504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.956530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.956652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.956677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.956771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.956796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.956898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.956925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.957022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.957047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.957150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.957176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.957272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.957298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.957396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-24 19:21:34.957423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-24 19:21:34.957528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.070 [2024-07-24 19:21:34.957555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.070 qpair failed and we were unable to recover it. 00:24:29.070 [2024-07-24 19:21:34.957657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.070 [2024-07-24 19:21:34.957686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.070 qpair failed and we were unable to recover it. 
00:24:29.070 [2024-07-24 19:21:34.957785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.957811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.957911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.957937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.958032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.958058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.958150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.958177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.958286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.958316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.958444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.958470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.958581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.958608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.958721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.958756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.958866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.958894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.958992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.959018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.959130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.959155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.959258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.959285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.959385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.959412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.959503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.959530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.959648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.959674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.959786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.959812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.959943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.959969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.960069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.960096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.960196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.960223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.960320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.960346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.960458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.960488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.960588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.960580] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization...
00:24:29.070 [2024-07-24 19:21:34.960615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.960652] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:29.070 [2024-07-24 19:21:34.960744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.960769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.960882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.960907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.961020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.961044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.961147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.961172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.961269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.961295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-24 19:21:34.961403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-24 19:21:34.961429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.961543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.961570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.961669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.961695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.961812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.961839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.961945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.961974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.962087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.962120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.962218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.962243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.962340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.962366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.962501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.962527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.962634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.962660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.962800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.962826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.962958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.962994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.963136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.963170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.963300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.963327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.963431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.963457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.963572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.963599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.963716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.963748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.963862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.963891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.963991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.964018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.964128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.964154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.964253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.964280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.964415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.964443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.964563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.964590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.964693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.964720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.964820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.964847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.964974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.965000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.965128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.965154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.965266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.965292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.965386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.965413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.965520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.965547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.965682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.965709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.965846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.965872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.965974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.966001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.966095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.966122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.966263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.966289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.966396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-24 19:21:34.966423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-24 19:21:34.966525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.966552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.966646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.966672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.966777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.966805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.966899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.966926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.967019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.967045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.967172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.967198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.967301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.967328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.967433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.967460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.967609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.967639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.967740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.967767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.967869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.967895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.968007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.968033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.968140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.968166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.968261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.968286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.968400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.968425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.968533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.968559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.968690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.968717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.968812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.968838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.968946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.968971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.969070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.969097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.969230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.969262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.969363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.969389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.969495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.969522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.969632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.969658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.969784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.969810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.969910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.969936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.970038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.970065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.970197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.970223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.970318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.970344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.970443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.970470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.970609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.970635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.970758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.970784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.970899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.970925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.971019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.971045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.971143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.971176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.971287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.971313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.971405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.971431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.971541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-24 19:21:34.971568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-24 19:21:34.971664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.971690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.971805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.971831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.971974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.972000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.972100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.972128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.972237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.972263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.972376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.972403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.972512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.972541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.972639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.972666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.972789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.972815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.972920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.972946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.973046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.973073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.973206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.973231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.973324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.973350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.973445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.973471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.973597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.973623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.973722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.973749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.973850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.973877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.973975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.974001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.974092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.974118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.974214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.974242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.974358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.974384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.974552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.974580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.974683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.974714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.974870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.974896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.974989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.975014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.975123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.975150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.975263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.975289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.975400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.975426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.975539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.975566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.975694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.975721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.975828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.975858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.975963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.975990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.976115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.976141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.976254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.976281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.976397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.976423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.976518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.976546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.976647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.976674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.976777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.976804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.976933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.073 [2024-07-24 19:21:34.976959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.073 qpair failed and we were unable to recover it.
00:24:29.073 [2024-07-24 19:21:34.977093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.977122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.977223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.977250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.977363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.977389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.977535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.977561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.977688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.977715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.977842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.977867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.977979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.978005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.978102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.978129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.978220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.978246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.978354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.978380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.978491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.978519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.978622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.978649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.978757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.978785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.978892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.978917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.979044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.979071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.979167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.979193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.979288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.979314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.979414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.979440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.979569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.979595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.979687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.979713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.979812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.979839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.979936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.979965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.980064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.980089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.980189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.980221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.980349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.980375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.980501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.980529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.980627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.980652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.980754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.980782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.980886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-24 19:21:34.980913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-24 19:21:34.981010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-24 19:21:34.981036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-24 19:21:34.981153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-24 19:21:34.981180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-24 19:21:34.981282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-24 19:21:34.981308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-24 19:21:34.981401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-24 19:21:34.981427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.981537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.981564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.981661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.981688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.981820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.981847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.981946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.981974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.982103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.982131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.982259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.982285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 
00:24:29.075 [2024-07-24 19:21:34.982384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.982410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.982509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.982535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.982628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.982654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.982771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.982798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.982906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.982932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.983041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.983067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.983168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.983197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.983295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.983322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.983415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.983441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.983560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.983586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 
00:24:29.075 [2024-07-24 19:21:34.983694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.983720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.983825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.983851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.983952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.983979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.984080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.984108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.984231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.984257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.984366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.984392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.984503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.984530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.984625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.984651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.984757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.984785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.984916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.984943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 
00:24:29.075 [2024-07-24 19:21:34.985070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.985097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.985197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.985223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.985320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.985346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.985447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.985474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.985583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.985613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.985745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.985771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.985867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.985892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.985986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.986012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.986119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.986145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.986245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.986272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 
00:24:29.075 [2024-07-24 19:21:34.986372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.986399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.986496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.986523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.986623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.986648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.986738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.986764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.075 qpair failed and we were unable to recover it. 00:24:29.075 [2024-07-24 19:21:34.986866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.075 [2024-07-24 19:21:34.986893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.986987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.987013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.987104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.987130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.987222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.987248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.987377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.987403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.987506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.987533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 
00:24:29.076 [2024-07-24 19:21:34.987629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.987654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.987750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.987775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.987899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.987924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.988022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.988047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.988156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.988182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.988275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.988301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.988400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.988426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.988522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.988549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.988651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.988680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.988778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.988804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 
00:24:29.076 [2024-07-24 19:21:34.988935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.988961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.989073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.989099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.989194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.989220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.989312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.989337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.989443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.989469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.989566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.989592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.989692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.989718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.989819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.989844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.989938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.989963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.990061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.990088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 
00:24:29.076 [2024-07-24 19:21:34.990203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.990232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.990331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.990358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.990450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.990476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.990581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.990608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.990707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.990737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.990829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.990855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.990952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.990978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.991080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.991107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.991209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.991236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.991333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.991359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 
00:24:29.076 [2024-07-24 19:21:34.991454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.991485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.991587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.991613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.991742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.991767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.991892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.991917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.992025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.992050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.992142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.076 [2024-07-24 19:21:34.992167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.076 qpair failed and we were unable to recover it. 00:24:29.076 [2024-07-24 19:21:34.992267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.992294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.992396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.992423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.992536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.992563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.992662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.992689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 
00:24:29.077 [2024-07-24 19:21:34.992787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.992813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.992914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.992943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.993071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.993097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.993199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.993226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.993322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.993348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.993469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.993506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.993605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.993632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.993741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.993768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.993865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.993892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.993997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.994024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 
00:24:29.077 [2024-07-24 19:21:34.994122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.994149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.994261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.994298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.994417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.994449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.994569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.994597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.994694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.994720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.994812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.994837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.994941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.994968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.995066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.995093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.995222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.995249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.995362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.995389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 
00:24:29.077 [2024-07-24 19:21:34.995501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.995530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.995632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.995659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.995760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.995787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.995883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.995909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.996034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.996065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.996191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.996217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.996312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.996338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.996428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.996454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.996585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.996611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.996705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.996731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 
00:24:29.077 [2024-07-24 19:21:34.996835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.996861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.996991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.997017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.997119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.997146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.997255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.997281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.997407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.997433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.997538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.997566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.997660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.997685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.997810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-24 19:21:34.997835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-24 19:21:34.997944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.078 [2024-07-24 19:21:34.997971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.078 qpair failed and we were unable to recover it. 00:24:29.078 [2024-07-24 19:21:34.998061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.078 [2024-07-24 19:21:34.998087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.078 qpair failed and we were unable to recover it. 
00:24:29.078 [2024-07-24 19:21:34.998211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.078 [2024-07-24 19:21:34.998237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.078 qpair failed and we were unable to recover it. 00:24:29.078 [2024-07-24 19:21:34.998367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.078 [2024-07-24 19:21:34.998397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.078 qpair failed and we were unable to recover it. 00:24:29.078 [2024-07-24 19:21:34.998500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.078 [2024-07-24 19:21:34.998528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.078 qpair failed and we were unable to recover it. 00:24:29.078 [2024-07-24 19:21:34.998621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.078 [2024-07-24 19:21:34.998647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.078 qpair failed and we were unable to recover it. 00:24:29.078 [2024-07-24 19:21:34.998781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.078 [2024-07-24 19:21:34.998808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.078 qpair failed and we were unable to recover it. 00:24:29.078 [2024-07-24 19:21:34.998919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.078 [2024-07-24 19:21:34.998945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.078 qpair failed and we were unable to recover it. 00:24:29.078 [2024-07-24 19:21:34.999041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.078 [2024-07-24 19:21:34.999066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.078 qpair failed and we were unable to recover it. 00:24:29.078 [2024-07-24 19:21:34.999165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.078 [2024-07-24 19:21:34.999193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.078 qpair failed and we were unable to recover it. 00:24:29.078 [2024-07-24 19:21:34.999292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.078 [2024-07-24 19:21:34.999318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.078 qpair failed and we were unable to recover it. 00:24:29.078 [2024-07-24 19:21:34.999416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.078 [2024-07-24 19:21:34.999441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.078 qpair failed and we were unable to recover it. 
00:24:29.078 EAL: No free 2048 kB hugepages reported on node 1
[... the connect()/qpair failure sequence continues from 19:21:34.999 through 19:21:35.006 on tqpair handles 0x7f05fc000b90 and 0x7f05f4000b90, still against addr=10.0.0.2, port=4420 with errno = 111 ...]
00:24:29.079 [2024-07-24 19:21:35.006129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.079 [2024-07-24 19:21:35.006155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.079 qpair failed and we were unable to recover it.
00:24:29.079 [2024-07-24 19:21:35.006258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.079 [2024-07-24 19:21:35.006283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.079 qpair failed and we were unable to recover it. 00:24:29.079 [2024-07-24 19:21:35.006375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.079 [2024-07-24 19:21:35.006401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.079 qpair failed and we were unable to recover it. 00:24:29.079 [2024-07-24 19:21:35.006509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.079 [2024-07-24 19:21:35.006536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.079 qpair failed and we were unable to recover it. 00:24:29.079 [2024-07-24 19:21:35.006634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.079 [2024-07-24 19:21:35.006661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.079 qpair failed and we were unable to recover it. 00:24:29.079 [2024-07-24 19:21:35.006786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.079 [2024-07-24 19:21:35.006818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.006931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.006957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.007079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.007105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.007198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.007224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.007321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.007346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.007446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.007473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 
00:24:29.080 [2024-07-24 19:21:35.007586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.007613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.007721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.007748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.007848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.007875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.007984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.008011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.008122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.008149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.008264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.008290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.008386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.008413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.008516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.008543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.008674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.008701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.008808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.008834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 
00:24:29.080 [2024-07-24 19:21:35.008936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.008964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.009082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.009108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.009202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.009228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.009323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.009349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.009457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.009489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.009598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.009624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.009724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.009750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.009859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.009885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.009999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.010028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.010130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.010157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 
00:24:29.080 [2024-07-24 19:21:35.010262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.010289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.010409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.010436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.010535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.010562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.010654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.010680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.010778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.010803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.010900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.010928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.011042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.011068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.011175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.011201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.011315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.011341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 00:24:29.080 [2024-07-24 19:21:35.011436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.080 [2024-07-24 19:21:35.011462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.080 qpair failed and we were unable to recover it. 
00:24:29.080 [2024-07-24 19:21:35.011570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.011597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.011693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.011718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.011821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.011849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.011949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.011977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.012078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.012108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.012213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.012240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.012342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.012372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.012489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.012516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.012612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.012639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.012738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.012764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 
00:24:29.081 [2024-07-24 19:21:35.012866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.012893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.013005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.013031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.013131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.013159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.013275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.013315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.013424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.013454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.013581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.013609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.013724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.013751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.013848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.013874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.013988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.014014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.014116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.014143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 
00:24:29.081 [2024-07-24 19:21:35.014244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.014271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.014378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.014407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.014520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.014548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.014640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.014665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.014766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.014792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.014887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.014912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.015024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.015051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.015153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.015180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.015290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.015318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.015416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.015442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 
00:24:29.081 [2024-07-24 19:21:35.015562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.015588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.015690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.015718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.015825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.015851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.015956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.015981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.016078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.016104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.016198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.081 [2024-07-24 19:21:35.016223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.081 qpair failed and we were unable to recover it. 00:24:29.081 [2024-07-24 19:21:35.016316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.016341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.016442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.016474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.016584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.016610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.016703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.016729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 
00:24:29.082 [2024-07-24 19:21:35.016836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.016861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.016958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.016983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.017080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.017106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.017204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.017229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.017327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.017354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.017462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.017503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.017616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.017642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.017740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.017766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.017875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.017905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.018004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.018031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 
00:24:29.082 [2024-07-24 19:21:35.018139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.018167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.018281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.018308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.018418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.018446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.018555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.018583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.018692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.018719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.018825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.018851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.018944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.018970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.019066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.019095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.019197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.019230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.019335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.019361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 
00:24:29.082 [2024-07-24 19:21:35.019458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.019490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.019608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.019635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.082 [2024-07-24 19:21:35.019735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.082 [2024-07-24 19:21:35.019762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.082 qpair failed and we were unable to recover it. 00:24:29.351 [2024-07-24 19:21:35.019857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.351 [2024-07-24 19:21:35.019884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.351 qpair failed and we were unable to recover it. 00:24:29.351 [2024-07-24 19:21:35.019999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.351 [2024-07-24 19:21:35.020025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.351 qpair failed and we were unable to recover it. 00:24:29.351 [2024-07-24 19:21:35.020126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.351 [2024-07-24 19:21:35.020152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.351 qpair failed and we were unable to recover it. 00:24:29.351 [2024-07-24 19:21:35.020250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.351 [2024-07-24 19:21:35.020276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.351 qpair failed and we were unable to recover it. 00:24:29.351 [2024-07-24 19:21:35.020376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.351 [2024-07-24 19:21:35.020404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.351 qpair failed and we were unable to recover it. 00:24:29.351 [2024-07-24 19:21:35.020506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.020532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.020640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.020667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 
00:24:29.352 [2024-07-24 19:21:35.020775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.020802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.020901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.020929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.021031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.021057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.021167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.021195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.021297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.021324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.021424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.021450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.021553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.021579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.021693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.021719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.021835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.021863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.021963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.021990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 
00:24:29.352 [2024-07-24 19:21:35.022099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.022125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.022225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.022252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.022363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.022389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.022504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.022531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.022625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.022651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.022760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.022788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.022884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.022911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.023015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.023043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.023144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.023171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 00:24:29.352 [2024-07-24 19:21:35.023270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.352 [2024-07-24 19:21:35.023296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.352 qpair failed and we were unable to recover it. 
00:24:29.352 [2024-07-24 19:21:35.023392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.352 [2024-07-24 19:21:35.023418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420
00:24:29.352 qpair failed and we were unable to recover it.
[log condensed: the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair, each followed by "qpair failed and we were unable to recover it.", repeats for roughly 200 consecutive connect attempts from 19:21:35.023392 through 19:21:35.050652 (console time 00:24:29.352 onward), always with errno = 111 and addr=10.0.0.2, port=4420; the tqpair value alternates among 0x7f05fc000b90, 0xd42120, 0x7f05f4000b90, and 0x7f0604000b90. One unrelated record is interleaved partway through:]
00:24:29.354 [2024-07-24 19:21:35.033288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[log condensed: the connect-refused pattern then resumes unchanged through the end of this stretch, ending with tqpair=0x7f05f4000b90.]
00:24:29.359 [2024-07-24 19:21:35.050754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.050782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.050878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.050905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.051022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.051049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.051166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.051194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.051292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.051318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.051437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.051464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.051570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.051598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.051697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.051723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.051818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.051845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.051950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.051977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 
00:24:29.359 [2024-07-24 19:21:35.052093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.052119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.052222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.052251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.052366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.052392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.052499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.052526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.052627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.052653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.052750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.052776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.052888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.052916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.053025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.053051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.053149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.053178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.053288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.053313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 
00:24:29.359 [2024-07-24 19:21:35.053415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.053441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.053556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.053582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.053677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.053702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.053798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.053823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.053918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.053944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.054039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.054064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.054159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.054189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.054287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.054312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.054402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.054428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.054531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.054557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 
00:24:29.359 [2024-07-24 19:21:35.054661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.054690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.054793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.054819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.054923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.054950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.055054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.055082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.055177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.359 [2024-07-24 19:21:35.055203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.359 qpair failed and we were unable to recover it. 00:24:29.359 [2024-07-24 19:21:35.055299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.055326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.055418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.055445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.055556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.055584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.055693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.055720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.055820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.055847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 
00:24:29.360 [2024-07-24 19:21:35.055950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.055977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.056080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.056109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.056207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.056233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.056334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.056361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.056475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.056507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.056605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.056632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.056731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.056757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.056859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.056885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.056997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.057023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.057134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.057160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 
00:24:29.360 [2024-07-24 19:21:35.057253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.057278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.057374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.057400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.057505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.057530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.057632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.057668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.057771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.057801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.057906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.057933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.058033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.058060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.058156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.058182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.058295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.058322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.058437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.058463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 
00:24:29.360 [2024-07-24 19:21:35.058580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.058607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.058709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.058736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.058846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.058873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.058966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.058994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.059091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.059117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.059216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.059245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.059341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.059367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.059493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.059521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.059622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.059648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.059744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.059770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 
00:24:29.360 [2024-07-24 19:21:35.059885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.360 [2024-07-24 19:21:35.059911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.360 qpair failed and we were unable to recover it. 00:24:29.360 [2024-07-24 19:21:35.060013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.060039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.060135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.060161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.060269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.060296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.060388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.060415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.060521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.060549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.060643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.060669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.060768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.060795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.060887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.060912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.061023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.061049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 
00:24:29.361 [2024-07-24 19:21:35.061152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.061180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.061278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.061304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.061399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.061424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.061517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.061544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.061639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.061664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.061755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.061781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.061878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.061903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.061992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.062017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.062125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.062150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.062265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.062293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 
00:24:29.361 [2024-07-24 19:21:35.062394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.062424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.062531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.062558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.062671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.062698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.062812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.062838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.062953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.062979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.063094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.063122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.063235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.063262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.063375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.063401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.063506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.063533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.063630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.063656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 
00:24:29.361 [2024-07-24 19:21:35.063759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.063785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.063887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.361 [2024-07-24 19:21:35.063915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.361 qpair failed and we were unable to recover it. 00:24:29.361 [2024-07-24 19:21:35.064022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.064049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.064146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.064173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.064269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.064295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.064402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.064429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.064545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.064571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.064669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.064697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.064799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.064825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.064928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.064956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 
00:24:29.362 [2024-07-24 19:21:35.065054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.065080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.065188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.065214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.065314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.065340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.065438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.065464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.065580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.065606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.065711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.065737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.065835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.065861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.065964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.065991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.066113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.066139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.066249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.066277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 
00:24:29.362 [2024-07-24 19:21:35.066374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.066407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.066518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.066545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.066644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.066670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.066786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.066813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.066911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.066937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.067034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.067061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.067167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.067194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.067299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.067324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.067422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.067449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.067564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.067590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 
00:24:29.362 [2024-07-24 19:21:35.067686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.067712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.067821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.067846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.067958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.362 [2024-07-24 19:21:35.067984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.362 qpair failed and we were unable to recover it. 00:24:29.362 [2024-07-24 19:21:35.068098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.363 [2024-07-24 19:21:35.068123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.363 qpair failed and we were unable to recover it. 00:24:29.363 [2024-07-24 19:21:35.068221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.363 [2024-07-24 19:21:35.068247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.363 qpair failed and we were unable to recover it. 00:24:29.363 [2024-07-24 19:21:35.068349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.363 [2024-07-24 19:21:35.068378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.363 qpair failed and we were unable to recover it. 00:24:29.363 [2024-07-24 19:21:35.068471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.363 [2024-07-24 19:21:35.068505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.363 qpair failed and we were unable to recover it. 00:24:29.363 [2024-07-24 19:21:35.068602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.363 [2024-07-24 19:21:35.068628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.363 qpair failed and we were unable to recover it. 00:24:29.363 [2024-07-24 19:21:35.068721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.363 [2024-07-24 19:21:35.068747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.363 qpair failed and we were unable to recover it. 00:24:29.363 [2024-07-24 19:21:35.068849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.363 [2024-07-24 19:21:35.068874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.363 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously through 2024-07-24 19:21:35.095, alternating among tqpair=0xd42120, 0x7f05f4000b90, and 0x7f05fc000b90, all against addr=10.0.0.2, port=4420 ...]
00:24:29.369 [2024-07-24 19:21:35.095331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.095357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.095487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.095516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.095621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.095647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.095778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.095803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.095930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.095956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.096053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.096078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.096170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.096209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.096320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.096348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.096498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.096528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05f4000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.096661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.096699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 A controller has encountered a failure and is being reset. 
00:24:29.369 [2024-07-24 19:21:35.096849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.096877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.097003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.097029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.097141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.097167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.097291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.097316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.097429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.097456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.097612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.097640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.097778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.097806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.097916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.097943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.098046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.098073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.098174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.098200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 
00:24:29.369 [2024-07-24 19:21:35.098317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.098343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.098458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.098491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.098593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.098626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.098741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.098767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.098898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.098924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.099058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.099084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.099190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.099218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.099318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.099345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.099469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.099503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.099650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.099677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 
00:24:29.369 [2024-07-24 19:21:35.099805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.369 [2024-07-24 19:21:35.099830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.369 qpair failed and we were unable to recover it. 00:24:29.369 [2024-07-24 19:21:35.099932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.099958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.100059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.100086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.100209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.100235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.100334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.100361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.100460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.100493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.100604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.100631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.100736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.100763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.100870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.100897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.101004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.101030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 
00:24:29.370 [2024-07-24 19:21:35.101141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.101167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.101266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.101293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.101388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.101414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f05fc000b90 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.101548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.101579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.101713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.101739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.101844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.101871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.101974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.102001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.102115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.102141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.102238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.102264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42120 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 00:24:29.370 [2024-07-24 19:21:35.102390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.102427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0604000b90 with addr=10.0.0.2, port=4420 00:24:29.370 qpair failed and we were unable to recover it. 
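errno 111 on Linux is ECONNREFUSED: at this point in the disconnect scenario no listener is up on 10.0.0.2:4420, so every reconnect attempt is refused until the target subsystem and listener are brought back later in the test. A quick, illustrative way to decode an errno value on the build host (assumes python3 is available, as elsewhere in this run; not part of the captured output):

  $ python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  ECONNREFUSED Connection refused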
00:24:29.370 [2024-07-24 19:21:35.102604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.370 [2024-07-24 19:21:35.102648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd50190 with addr=10.0.0.2, port=4420 00:24:29.370 [2024-07-24 19:21:35.102668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd50190 is same with the state(5) to be set 00:24:29.370 [2024-07-24 19:21:35.102697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd50190 (9): Bad file descriptor 00:24:29.370 [2024-07-24 19:21:35.102718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.370 [2024-07-24 19:21:35.102734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.370 [2024-07-24 19:21:35.102753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.370 Unable to reset the controller. 00:24:29.370 [2024-07-24 19:21:35.153960] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.370 [2024-07-24 19:21:35.154022] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.370 [2024-07-24 19:21:35.154038] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.370 [2024-07-24 19:21:35.154051] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.370 [2024-07-24 19:21:35.154062] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:29.370 [2024-07-24 19:21:35.154145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:29.370 [2024-07-24 19:21:35.154198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:29.370 [2024-07-24 19:21:35.154248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:29.370 [2024-07-24 19:21:35.154251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:24:29.370 Malloc0 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:29.370 [2024-07-24 19:21:35.320225] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.370 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:29.371 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.371 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:29.371 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.371 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:29.371 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.371 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.371 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.371 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:29.371 [2024-07-24 19:21:35.348492] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.371 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.371 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:29.371 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.371 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:29.631 19:21:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.631 19:21:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2644955 00:24:30.201 Controller properly reset. 00:24:35.473 Initializing NVMe Controllers 00:24:35.473 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:35.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:35.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:35.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:35.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:35.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:35.473 Initialization complete. Launching workers. 00:24:35.473 Starting thread on core 1 00:24:35.473 Starting thread on core 2 00:24:35.473 Starting thread on core 3 00:24:35.473 Starting thread on core 0 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:35.473 00:24:35.473 real 0m10.715s 00:24:35.473 user 0m33.384s 00:24:35.473 sys 0m8.071s 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:35.473 ************************************ 00:24:35.473 END TEST nvmf_target_disconnect_tc2 00:24:35.473 ************************************ 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:35.473 rmmod nvme_tcp 00:24:35.473 rmmod nvme_fabrics 00:24:35.473 rmmod nvme_keyring 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2645277 ']' 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2645277 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2645277 ']' 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2645277 00:24:35.473 19:21:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2645277 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:24:35.473 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:24:35.474 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2645277' 00:24:35.474 killing process with pid 2645277 00:24:35.474 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2645277 00:24:35.474 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2645277 00:24:35.734 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:35.734 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:35.734 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:35.734 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:35.734 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:35.734 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.734 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.734 19:21:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.643 19:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:37.643 00:24:37.643 real 0m15.158s 00:24:37.643 user 0m58.190s 00:24:37.643 sys 0m10.429s 00:24:37.643 19:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:37.643 19:21:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:37.643 ************************************ 00:24:37.643 END TEST nvmf_target_disconnect 00:24:37.643 ************************************ 00:24:37.643 19:21:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:37.643 00:24:37.643 real 4m58.581s 00:24:37.643 user 10m56.838s 00:24:37.643 sys 1m10.372s 00:24:37.643 19:21:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:37.643 19:21:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.643 ************************************ 00:24:37.643 END TEST nvmf_host 00:24:37.643 ************************************ 00:24:37.643 00:24:37.643 real 19m35.090s 00:24:37.643 user 46m58.787s 00:24:37.643 sys 4m38.370s 00:24:37.643 19:21:43 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:37.643 19:21:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:37.643 ************************************ 00:24:37.643 END TEST nvmf_tcp 00:24:37.643 ************************************ 00:24:37.643 19:21:43 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:24:37.643 19:21:43 -- 
spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:37.643 19:21:43 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:37.643 19:21:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:37.643 19:21:43 -- common/autotest_common.sh@10 -- # set +x 00:24:37.901 ************************************ 00:24:37.901 START TEST spdkcli_nvmf_tcp 00:24:37.901 ************************************ 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:37.901 * Looking for test storage... 00:24:37.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2646219 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@34 -- # waitforlisten 2646219 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2646219 ']' 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.901 19:21:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:37.901 [2024-07-24 19:21:43.783470] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:24:37.901 [2024-07-24 19:21:43.783582] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2646219 ] 00:24:37.901 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.901 [2024-07-24 19:21:43.844516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:38.159 [2024-07-24 19:21:43.962453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.159 [2024-07-24 19:21:43.962459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.159 19:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.159 19:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:24:38.159 19:21:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:38.159 19:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:38.159 19:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:38.159 19:21:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:38.159 19:21:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:38.159 19:21:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:38.159 19:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.159 19:21:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:38.159 19:21:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:38.159 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:38.159 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:38.159 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:38.159 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:38.159 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:38.159 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:38.159 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces 
create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:38.159 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:38.159 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:38.159 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:38.159 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:38.159 ' 00:24:40.696 [2024-07-24 19:21:46.691232] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.076 [2024-07-24 19:21:47.931417] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:24:44.626 [2024-07-24 19:21:50.222465] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:24:46.526 [2024-07-24 19:21:52.200572] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:24:47.899 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:47.899 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:47.899 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:47.899 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:47.899 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:47.899 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:47.899 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 
io_unit_size=8192', '', True] 00:24:47.899 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:47.899 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:47.899 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:47.899 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:47.899 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:47.899 19:21:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:47.899 19:21:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:47.899 19:21:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.899 19:21:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:47.899 19:21:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:47.899 19:21:53 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:24:47.899 19:21:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:24:47.899 19:21:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:24:48.466 19:21:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:48.466 19:21:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:48.466 19:21:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:48.466 19:21:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:48.466 19:21:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:48.466 19:21:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:48.466 19:21:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:48.466 19:21:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:48.466 19:21:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:48.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:48.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:48.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:48.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:24:48.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:24:48.466 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:48.466 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:48.466 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:48.466 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:48.466 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:48.466 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:48.466 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:48.466 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:48.466 ' 00:24:53.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:53.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:53.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:53.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:53.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:24:53.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:24:53.726 Executing command: ['/nvmf/subsystem delete 
nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:53.726 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:53.726 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:53.726 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:24:53.726 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:24:53.726 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:53.726 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:53.726 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:53.726 19:21:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:53.726 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:53.726 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:53.726 19:21:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2646219 00:24:53.726 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2646219 ']' 00:24:53.726 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2646219 00:24:53.726 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:24:53.726 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:53.726 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2646219 00:24:53.726 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:53.726 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:53.726 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2646219' 00:24:53.726 killing process with pid 2646219 00:24:53.726 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2646219 00:24:53.726 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2646219 00:24:53.985 19:21:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:24:53.985 19:21:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:24:53.985 19:21:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2646219 ']' 00:24:53.985 19:21:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2646219 00:24:53.985 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2646219 ']' 00:24:53.985 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2646219 00:24:53.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2646219) - No such process 00:24:53.985 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2646219 is not found' 00:24:53.985 Process with pid 2646219 is not found 00:24:53.985 19:21:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:53.985 19:21:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:53.985 19:21:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:53.985 00:24:53.985 real 0m16.230s 00:24:53.985 user 0m34.526s 00:24:53.985 sys 0m0.830s 00:24:53.985 19:21:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:53.985 19:21:59 spdkcli_nvmf_tcp 
-- common/autotest_common.sh@10 -- # set +x 00:24:53.985 ************************************ 00:24:53.985 END TEST spdkcli_nvmf_tcp 00:24:53.985 ************************************ 00:24:53.985 19:21:59 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:53.985 19:21:59 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:53.985 19:21:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:53.985 19:21:59 -- common/autotest_common.sh@10 -- # set +x 00:24:53.985 ************************************ 00:24:53.985 START TEST nvmf_identify_passthru 00:24:53.985 ************************************ 00:24:53.985 19:21:59 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:53.985 * Looking for test storage... 00:24:53.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:53.985 19:21:59 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.243 19:21:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.243 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.243 19:22:00 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.243 19:22:00 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.243 19:22:00 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.244 19:22:00 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.244 19:22:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.244 19:22:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.244 19:22:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:24:54.244 19:22:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:54.244 19:22:00 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.244 19:22:00 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.244 19:22:00 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.244 19:22:00 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.244 19:22:00 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.244 19:22:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.244 19:22:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.244 19:22:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:24:54.244 19:22:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.244 19:22:00 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.244 19:22:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:54.244 19:22:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:54.244 19:22:00 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:24:54.244 19:22:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.147 19:22:01 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.147 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:56.148 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:56.148 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:56.148 Found net devices under 0000:08:00.0: cvl_0_0 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:56.148 Found net devices under 0000:08:00.1: cvl_0_1 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
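(Annotation: the discovery pass traced above works by matching PCI vendor:device pairs — 0x8086:0x159b is an Intel E810 port driven by ice — and then resolving each matched function to its kernel interface through sysfs. A minimal standalone sketch of the same idea follows; the paths, IDs, and "Found ..." messages are taken from the log, but the loop structure is an illustration, not the nvmf/common.sh source.)

for pci in /sys/bus/pci/devices/*; do
  vendor=$(<"$pci/vendor")            # e.g. 0x8086 (Intel)
  device=$(<"$pci/device")            # e.g. 0x159b (E810, "ice" driver)
  [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
  echo "Found ${pci##*/} ($vendor - $device)"
  for net in "$pci"/net/*; do         # the kernel exposes the netdev name here
    [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
  done
done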
00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:56.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:24:56.148 00:24:56.148 --- 10.0.0.2 ping statistics --- 00:24:56.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.148 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:24:56.148 00:24:56.148 --- 10.0.0.1 ping statistics --- 00:24:56.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.148 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:56.148 19:22:01 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:56.148 19:22:01 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:24:56.148 19:22:01 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:56.148 19:22:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:56.148 19:22:01 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:24:56.148 19:22:01 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:24:56.148 19:22:01 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:24:56.148 19:22:01 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:24:56.148 19:22:01 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:24:56.148 19:22:01 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:24:56.148 19:22:01 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:24:56.148 19:22:01 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:56.148 19:22:01 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:56.148 19:22:01 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:24:56.148 19:22:01 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:24:56.148 19:22:01 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:84:00.0 00:24:56.148 19:22:01 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:84:00.0 00:24:56.148 19:22:01 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:84:00.0 00:24:56.148 19:22:01 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:84:00.0 ']' 00:24:56.148 19:22:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:24:56.148 19:22:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:24:56.148 19:22:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:24:56.148 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.336 
19:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ8275016S1P0FGN 00:25:00.336 19:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:25:00.336 19:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:00.336 19:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:00.336 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.524 19:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:04.524 19:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:04.524 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:04.524 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.524 19:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:04.524 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:04.524 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.524 19:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2649771 00:25:04.524 19:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:04.524 19:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:04.524 19:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2649771 00:25:04.524 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2649771 ']' 00:25:04.524 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.524 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:04.524 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.524 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:04.524 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.524 [2024-07-24 19:22:10.359593] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:25:04.524 [2024-07-24 19:22:10.359691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.524 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.524 [2024-07-24 19:22:10.428626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:04.783 [2024-07-24 19:22:10.549300] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.783 [2024-07-24 19:22:10.549358] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:04.783 [2024-07-24 19:22:10.549374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.783 [2024-07-24 19:22:10.549387] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.784 [2024-07-24 19:22:10.549399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:04.784 [2024-07-24 19:22:10.549458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.784 [2024-07-24 19:22:10.549534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:04.784 [2024-07-24 19:22:10.549502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:04.784 [2024-07-24 19:22:10.549574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.784 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:04.784 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:25:04.784 19:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:04.784 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.784 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.784 INFO: Log level set to 20 00:25:04.784 INFO: Requests: 00:25:04.784 { 00:25:04.784 "jsonrpc": "2.0", 00:25:04.784 "method": "nvmf_set_config", 00:25:04.784 "id": 1, 00:25:04.784 "params": { 00:25:04.784 "admin_cmd_passthru": { 00:25:04.784 "identify_ctrlr": true 00:25:04.784 } 00:25:04.784 } 00:25:04.784 } 00:25:04.784 00:25:04.784 INFO: response: 00:25:04.784 { 00:25:04.784 "jsonrpc": "2.0", 00:25:04.784 "id": 1, 00:25:04.784 "result": true 00:25:04.784 } 00:25:04.784 00:25:04.784 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.784 19:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:04.784 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.784 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.784 INFO: Setting log level to 20 00:25:04.784 INFO: Setting log level to 20 00:25:04.784 INFO: Log level set to 20 00:25:04.784 INFO: Log level set to 20 00:25:04.784 INFO: Requests: 00:25:04.784 { 00:25:04.784 "jsonrpc": "2.0", 00:25:04.784 "method": "framework_start_init", 00:25:04.784 "id": 1 00:25:04.784 } 00:25:04.784 00:25:04.784 INFO: Requests: 00:25:04.784 { 00:25:04.784 "jsonrpc": "2.0", 00:25:04.784 "method": "framework_start_init", 00:25:04.784 "id": 1 00:25:04.784 } 00:25:04.784 00:25:04.784 [2024-07-24 19:22:10.739617] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:04.784 INFO: response: 00:25:04.784 { 00:25:04.784 "jsonrpc": "2.0", 00:25:04.784 "id": 1, 00:25:04.784 "result": true 00:25:04.784 } 00:25:04.784 00:25:04.784 INFO: response: 00:25:04.784 { 00:25:04.784 "jsonrpc": "2.0", 00:25:04.784 "id": 1, 00:25:04.784 "result": true 00:25:04.784 } 00:25:04.784 00:25:04.784 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.784 19:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:04.784 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.784 19:22:10 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:04.784 INFO: Setting log level to 40 00:25:04.784 INFO: Setting log level to 40 00:25:04.784 INFO: Setting log level to 40 00:25:04.784 [2024-07-24 19:22:10.749612] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.784 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.784 19:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:04.784 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:04.784 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.784 19:22:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 00:25:04.784 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.784 19:22:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:08.071 Nvme0n1 00:25:08.071 19:22:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.071 19:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:08.071 19:22:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.071 19:22:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:08.071 19:22:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.071 19:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:08.071 19:22:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.071 19:22:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:08.071 19:22:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.071 19:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:08.072 19:22:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.072 19:22:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:08.072 [2024-07-24 19:22:13.623363] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.072 19:22:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.072 19:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:08.072 19:22:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.072 19:22:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:08.072 [ 00:25:08.072 { 00:25:08.072 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:08.072 "subtype": "Discovery", 00:25:08.072 "listen_addresses": [], 00:25:08.072 "allow_any_host": true, 00:25:08.072 "hosts": [] 00:25:08.072 }, 00:25:08.072 { 00:25:08.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.072 "subtype": "NVMe", 00:25:08.072 "listen_addresses": [ 00:25:08.072 { 00:25:08.072 "trtype": "TCP", 00:25:08.072 "adrfam": "IPv4", 00:25:08.072 "traddr": "10.0.0.2", 00:25:08.072 "trsvcid": "4420" 00:25:08.072 } 00:25:08.072 ], 00:25:08.072 "allow_any_host": true, 00:25:08.072 "hosts": [], 00:25:08.072 "serial_number": 
"SPDK00000000000001", 00:25:08.072 "model_number": "SPDK bdev Controller", 00:25:08.072 "max_namespaces": 1, 00:25:08.072 "min_cntlid": 1, 00:25:08.072 "max_cntlid": 65519, 00:25:08.072 "namespaces": [ 00:25:08.072 { 00:25:08.072 "nsid": 1, 00:25:08.072 "bdev_name": "Nvme0n1", 00:25:08.072 "name": "Nvme0n1", 00:25:08.072 "nguid": "5CE317EAB73A4942BC1CDC381C469B74", 00:25:08.072 "uuid": "5ce317ea-b73a-4942-bc1c-dc381c469b74" 00:25:08.072 } 00:25:08.072 ] 00:25:08.072 } 00:25:08.072 ] 00:25:08.072 19:22:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.072 19:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:08.072 19:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:08.072 19:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:08.072 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.072 19:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ8275016S1P0FGN 00:25:08.072 19:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:08.072 19:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:08.072 19:22:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:08.072 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.330 19:22:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:08.330 19:22:14 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ8275016S1P0FGN '!=' PHLJ8275016S1P0FGN ']' 00:25:08.330 19:22:14 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:08.330 19:22:14 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:08.330 19:22:14 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.330 19:22:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:08.330 19:22:14 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.330 19:22:14 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:08.330 19:22:14 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:08.330 19:22:14 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:08.330 19:22:14 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:08.330 19:22:14 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:08.330 19:22:14 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:08.330 19:22:14 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:08.330 19:22:14 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:08.330 rmmod nvme_tcp 00:25:08.330 rmmod nvme_fabrics 00:25:08.330 rmmod nvme_keyring 00:25:08.330 19:22:14 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:08.330 19:22:14 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:08.330 19:22:14 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:08.330 19:22:14 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2649771 ']' 00:25:08.330 19:22:14 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2649771 00:25:08.330 19:22:14 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2649771 ']' 00:25:08.330 19:22:14 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2649771 00:25:08.330 19:22:14 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:25:08.330 19:22:14 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:08.330 19:22:14 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2649771 00:25:08.330 19:22:14 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:08.330 19:22:14 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:08.330 19:22:14 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2649771' 00:25:08.330 killing process with pid 2649771 00:25:08.330 19:22:14 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2649771 00:25:08.330 19:22:14 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2649771 00:25:10.238 19:22:15 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:10.238 19:22:15 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:10.238 19:22:15 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:10.238 19:22:15 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:10.238 19:22:15 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:10.238 19:22:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.238 19:22:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:10.238 19:22:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.148 19:22:17 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:12.148 00:25:12.148 real 0m17.883s 00:25:12.148 user 0m27.262s 00:25:12.148 sys 0m2.093s 00:25:12.148 19:22:17 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:12.148 19:22:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:12.148 ************************************ 00:25:12.148 END TEST nvmf_identify_passthru 00:25:12.148 ************************************ 00:25:12.148 19:22:17 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:12.148 19:22:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:12.148 19:22:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:12.148 19:22:17 -- common/autotest_common.sh@10 -- # set +x 00:25:12.148 ************************************ 00:25:12.148 START TEST nvmf_dif 00:25:12.148 ************************************ 00:25:12.148 19:22:17 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:12.148 * Looking for test storage... 
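(Annotation: before the dif run gets going, it helps to recap the RPC sequence that drove the identify-passthru test just closed out above. Re-expressed directly against scripts/rpc.py — the rpc_cmd helper in the trace wraps that same script; the $rpc shorthand is an assumption, with the socket path taken from the waitforlisten address in the log:)

rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_set_config --passthru-identify-ctrlr    # forward admin identify to the backing controller
$rpc framework_start_init                         # release the --wait-for-rpc pause
$rpc nvmf_create_transport -t tcp -o -u 8192      # flags copied verbatim from the run above
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

(The test then ran spdk_nvme_identify against the TCP listener and compared serial and model numbers with the values read locally over PCIe — the PHLJ8275016S1P0FGN / INTEL match checked at identify_passthru.sh@63 and @68 above.)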
00:25:12.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:12.148 19:22:17 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.148 19:22:17 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:12.148 19:22:17 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.148 19:22:17 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.148 19:22:17 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.148 19:22:17 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.148 19:22:17 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.148 19:22:17 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.148 19:22:17 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.148 19:22:17 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.148 19:22:17 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.148 19:22:17 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.148 19:22:17 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.149 19:22:17 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.149 19:22:17 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.149 19:22:17 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.149 19:22:17 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.149 19:22:17 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.149 19:22:17 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.149 19:22:17 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:12.149 19:22:17 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:12.149 19:22:17 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:12.149 19:22:17 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:12.149 19:22:17 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:12.149 19:22:17 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:12.149 19:22:17 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.149 19:22:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:12.149 19:22:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:12.149 19:22:17 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:12.149 19:22:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.058 19:22:19 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:25:14.059 Found 0000:08:00.0 (0x8086 - 0x159b) 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:25:14.059 Found 0000:08:00.1 (0x8086 - 0x159b) 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
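(Annotation: one bash detail doing real work in the lines above — the pci_net_devs array is first filled with sysfs glob paths, then rewritten with the ${arr[@]##*/} expansion, which strips everything through the last slash from every element at once. In isolation:)

pci_net_devs=(/sys/bus/pci/devices/0000:08:00.0/net/cvl_0_0)   # glob result: full sysfs path
pci_net_devs=("${pci_net_devs[@]##*/}")                        # per-element strip up to the final '/'
echo "${pci_net_devs[@]}"                                      # -> cvl_0_0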
00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:25:14.059 Found net devices under 0000:08:00.0: cvl_0_0 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:25:14.059 Found net devices under 0000:08:00.1: cvl_0_1 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.059 19:22:19 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:14.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:25:14.059 00:25:14.059 --- 10.0.0.2 ping statistics --- 00:25:14.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.059 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:14.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:25:14.059 00:25:14.059 --- 10.0.0.1 ping statistics --- 00:25:14.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.059 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:14.059 19:22:19 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:14.630 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:25:14.630 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:14.630 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:25:14.630 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:25:14.630 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:25:14.630 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:25:14.630 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:25:14.630 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:25:14.630 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:25:14.630 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:25:14.630 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:25:14.630 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:25:14.630 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:25:14.630 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:25:14.630 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:25:14.630 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:25:14.630 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:25:14.890 19:22:20 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.890 19:22:20 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:14.890 19:22:20 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:14.890 19:22:20 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.890 19:22:20 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:14.890 19:22:20 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:14.890 19:22:20 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:14.890 19:22:20 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:14.890 19:22:20 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:14.890 19:22:20 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:14.890 19:22:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:14.890 19:22:20 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2652263 00:25:14.890 19:22:20 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:14.890 19:22:20 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2652263 00:25:14.890 19:22:20 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2652263 ']' 00:25:14.890 19:22:20 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.890 19:22:20 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:14.890 19:22:20 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.890 19:22:20 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:14.890 19:22:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:14.890 [2024-07-24 19:22:20.778031] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:25:14.890 [2024-07-24 19:22:20.778126] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.890 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.890 [2024-07-24 19:22:20.858325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.149 [2024-07-24 19:22:21.012370] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.149 [2024-07-24 19:22:21.012446] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.149 [2024-07-24 19:22:21.012489] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.149 [2024-07-24 19:22:21.012518] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.149 [2024-07-24 19:22:21.012541] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
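Condensed, the namespace wiring and target launch traced above amount to the following sequence (interface names, addresses, and flags exactly as this run reported them; the binary path is shortened and the addr-flush steps are omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps the host-side port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the namespace
ping -c 1 10.0.0.2                                             # verify host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # verify namespace -> host
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &   # harness then waits on /var/tmp/spdk.sock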
00:25:15.149 [2024-07-24 19:22:21.012588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.149 19:22:21 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:15.149 19:22:21 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:25:15.149 19:22:21 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:15.149 19:22:21 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:15.149 19:22:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.149 19:22:21 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.149 19:22:21 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:15.149 19:22:21 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:15.149 19:22:21 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.149 19:22:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.149 [2024-07-24 19:22:21.161690] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.408 19:22:21 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.408 19:22:21 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:15.408 19:22:21 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:15.408 19:22:21 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:15.408 19:22:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.408 ************************************ 00:25:15.408 START TEST fio_dif_1_default 00:25:15.408 ************************************ 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:15.408 bdev_null0 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:15.408 [2024-07-24 19:22:21.217968] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:15.408 { 00:25:15.408 "params": { 00:25:15.408 "name": "Nvme$subsystem", 00:25:15.408 "trtype": "$TEST_TRANSPORT", 00:25:15.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.408 "adrfam": "ipv4", 00:25:15.408 "trsvcid": "$NVMF_PORT", 00:25:15.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.408 "hdgst": ${hdgst:-false}, 00:25:15.408 "ddgst": ${ddgst:-false} 00:25:15.408 }, 00:25:15.408 "method": "bdev_nvme_attach_controller" 00:25:15.408 } 00:25:15.408 EOF 00:25:15.408 )") 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files ))
00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan
00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq .
00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=,
00:25:15.408 19:22:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:25:15.408 "params": {
00:25:15.408 "name": "Nvme0",
00:25:15.408 "trtype": "tcp",
00:25:15.408 "traddr": "10.0.0.2",
00:25:15.408 "adrfam": "ipv4",
00:25:15.408 "trsvcid": "4420",
00:25:15.408 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:25:15.408 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:25:15.408 "hdgst": false,
00:25:15.408 "ddgst": false
00:25:15.408 },
00:25:15.408 "method": "bdev_nvme_attach_controller"
00:25:15.408 }'
00:25:15.409 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=
00:25:15.409 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:25:15.409 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:25:15.409 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:25:15.409 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:25:15.409 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:25:15.409 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=
00:25:15.409 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:25:15.409 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:25:15.409 19:22:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:25:15.669 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:25:15.669 fio-3.35
00:25:15.669 Starting 1 thread
00:25:15.669 EAL: No free 2048 kB hugepages reported on node 1
00:25:27.868
00:25:27.868 filename0: (groupid=0, jobs=1): err= 0: pid=2652479: Wed Jul 24 19:22:32 2024
00:25:27.868 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10014msec)
00:25:27.868 slat (nsec): min=7396, max=66076, avg=9331.16, stdev=3791.32
00:25:27.868 clat (usec): min=40898, max=46401, avg=41009.75, stdev=360.24
00:25:27.868 lat (usec): min=40907, max=46440, avg=41019.09, stdev=360.97
00:25:27.868 clat percentiles (usec):
00:25:27.868 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:25:27.868 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:25:27.868 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:25:27.868 | 99.00th=[41681], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400],
00:25:27.868 | 99.99th=[46400]
00:25:27.868 bw ( KiB/s): min= 384, max= 416, per=99.52%, avg=388.80, stdev=11.72, samples=20
00:25:27.868 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20
lat (msec) : 50=100.00%
00:25:27.868 cpu : usr=90.03%, sys=9.65%, ctx=19, majf=0, minf=274
00:25:27.868 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:27.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:27.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:27.868 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:27.868 latency : target=0, window=0, percentile=100.00%, depth=4
00:25:27.868
00:25:27.868 Run status group 0 (all jobs):
00:25:27.868 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10014-10014msec
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:27.868
00:25:27.868 real 0m11.089s
00:25:27.868 user 0m9.959s
00:25:27.868 sys 0m1.195s
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:25:27.868 ************************************
00:25:27.868 END TEST fio_dif_1_default
00:25:27.868 ************************************
00:25:27.868 19:22:32 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:25:27.868 19:22:32 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:25:27.868 19:22:32 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:27.868 19:22:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:25:27.868 ************************************
00:25:27.868 START TEST fio_dif_1_multi_subsystems
00:25:27.868 ************************************
00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0
00:25:27.869 19:22:32
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:27.868 bdev_null0 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.868 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:27.869 [2024-07-24 19:22:32.359610] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:27.869 bdev_null1 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:27.869 { 00:25:27.869 "params": { 00:25:27.869 "name": "Nvme$subsystem", 00:25:27.869 "trtype": "$TEST_TRANSPORT", 00:25:27.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.869 "adrfam": "ipv4", 00:25:27.869 "trsvcid": "$NVMF_PORT", 00:25:27.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.869 "hdgst": ${hdgst:-false}, 00:25:27.869 "ddgst": ${ddgst:-false} 00:25:27.869 }, 00:25:27.869 "method": "bdev_nvme_attach_controller" 00:25:27.869 } 00:25:27.869 EOF 00:25:27.869 )") 00:25:27.869 19:22:32 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:27.869 { 00:25:27.869 "params": { 00:25:27.869 "name": "Nvme$subsystem", 00:25:27.869 "trtype": "$TEST_TRANSPORT", 00:25:27.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.869 "adrfam": "ipv4", 00:25:27.869 "trsvcid": "$NVMF_PORT", 00:25:27.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.869 "hdgst": ${hdgst:-false}, 00:25:27.869 "ddgst": ${ddgst:-false} 00:25:27.869 }, 00:25:27.869 "method": "bdev_nvme_attach_controller" 00:25:27.869 } 00:25:27.869 EOF 00:25:27.869 )") 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
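Every gen_nvmf_target_json expansion in this log follows the pattern visible above: one heredoc fragment per subsystem appended to a bash array, the fragments joined with commas, and the result pretty-printed through jq. A stripped-down sketch of that pattern (the enclosing object here is illustrative; the real helper emits a fuller document):

config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2",
  "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
done
# with IFS=, the ${config[*]} expansion joins the fragments with commas, giving a valid JSON array
(IFS=,; printf '{ "config": [ %s ] }\n' "${config[*]}") | jq .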
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=,
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:25:27.869 "params": {
00:25:27.869 "name": "Nvme0",
00:25:27.869 "trtype": "tcp",
00:25:27.869 "traddr": "10.0.0.2",
00:25:27.869 "adrfam": "ipv4",
00:25:27.869 "trsvcid": "4420",
00:25:27.869 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:25:27.869 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:25:27.869 "hdgst": false,
00:25:27.869 "ddgst": false
00:25:27.869 },
00:25:27.869 "method": "bdev_nvme_attach_controller"
00:25:27.869 },{
00:25:27.869 "params": {
00:25:27.869 "name": "Nvme1",
00:25:27.869 "trtype": "tcp",
00:25:27.869 "traddr": "10.0.0.2",
00:25:27.869 "adrfam": "ipv4",
00:25:27.869 "trsvcid": "4420",
00:25:27.869 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:25:27.869 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:25:27.869 "hdgst": false,
00:25:27.869 "ddgst": false
00:25:27.869 },
00:25:27.869 "method": "bdev_nvme_attach_controller"
00:25:27.869 }'
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:25:27.869 19:22:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:25:27.869 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:25:27.869 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:25:27.869 fio-3.35
00:25:27.869 Starting 2 threads
00:25:27.869 EAL: No free 2048 kB hugepages reported on node 1
00:25:37.835
00:25:37.835 filename0: (groupid=0, jobs=1): err= 0: pid=2653554: Wed Jul 24 19:22:43 2024
00:25:37.835 read: IOPS=186, BW=746KiB/s (764kB/s)(7488KiB/10034msec)
00:25:37.835 slat (nsec): min=7370, max=50089, avg=10715.82, stdev=5117.67
00:25:37.835 clat (usec): min=633, max=43681, avg=21406.30, stdev=20576.95
00:25:37.835 lat (usec): min=641, max=43716, avg=21417.01, stdev=20575.81
00:25:37.835 clat percentiles (usec):
00:25:37.835 | 1.00th=[ 644], 5.00th=[ 668], 10.00th=[ 676], 20.00th=[ 693],
00:25:37.835 | 30.00th=[ 709], 40.00th=[ 816], 50.00th=[40633], 60.00th=[41157],
00:25:37.835 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:25:37.835 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779],
00:25:37.835 | 99.99th=[43779]
00:25:37.835 bw ( KiB/s): min= 672, max= 768, per=56.79%, avg=747.20, stdev=33.28, samples=20
00:25:37.835 iops : min= 168, max= 192, avg=186.80, stdev= 8.32, samples=20
00:25:37.835 lat (usec) : 750=37.18%, 1000=9.88%
00:25:37.835 lat (msec) : 2=2.72%, 50=50.21%
00:25:37.835 cpu : usr=97.31%, sys=2.41%, ctx=17, majf=0, minf=130
00:25:37.835 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:37.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:37.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:37.835 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:37.835 latency : target=0, window=0, percentile=100.00%, depth=4
00:25:37.835 filename1: (groupid=0, jobs=1): err= 0: pid=2653555: Wed Jul 24 19:22:43 2024
00:25:37.835 read: IOPS=142, BW=569KiB/s (583kB/s)(5712KiB/10036msec)
00:25:37.835 slat (usec): min=5, max=107, avg=13.17, stdev= 5.78
00:25:37.835 clat (usec): min=676, max=45519, avg=28069.30, stdev=18920.07
00:25:37.835 lat (usec): min=687, max=45550, avg=28082.47, stdev=18919.59
00:25:37.835 clat percentiles (usec):
00:25:37.835 | 1.00th=[ 701], 5.00th=[ 742], 10.00th=[ 758], 20.00th=[ 783],
00:25:37.835 | 30.00th=[ 1074], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:25:37.835 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681],
00:25:37.835 | 99.00th=[42206], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351],
00:25:37.835 | 99.99th=[45351]
00:25:37.836 bw ( KiB/s): min= 384, max= 768, per=43.26%, avg=569.60, stdev=180.30, samples=20
00:25:37.836 iops : min= 96, max= 192, avg=142.40, stdev=45.08, samples=20
00:25:37.836 lat (usec) : 750=6.93%, 1000=21.01%
00:25:37.836 lat (msec) : 2=4.55%, 50=67.51%
00:25:37.836 cpu : usr=96.25%, sys=2.88%, ctx=56, majf=0, minf=239
00:25:37.836 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:37.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:37.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:37.836 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:37.836 latency : target=0, window=0, percentile=100.00%, depth=4
00:25:37.836
00:25:37.836 Run status group 0 (all jobs):
00:25:37.836 READ: bw=1315KiB/s (1347kB/s), 569KiB/s-746KiB/s (583kB/s-764kB/s), io=12.9MiB (13.5MB), run=10034-10036msec
00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub
00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0
00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0
00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:25:37.836 19:22:43
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.836 00:25:37.836 real 0m11.444s 00:25:37.836 user 0m20.603s 00:25:37.836 sys 0m0.838s 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:37.836 19:22:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:37.836 ************************************ 00:25:37.836 END TEST fio_dif_1_multi_subsystems 00:25:37.836 ************************************ 00:25:37.836 19:22:43 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:37.836 19:22:43 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:37.836 19:22:43 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:37.836 19:22:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:37.836 ************************************ 00:25:37.836 START TEST fio_dif_rand_params 00:25:37.836 ************************************ 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 
-- # create_subsystem 0 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.836 bdev_null0 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.836 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.098 [2024-07-24 19:22:43.858033] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:38.098 { 00:25:38.098 "params": { 00:25:38.098 "name": "Nvme$subsystem", 00:25:38.098 "trtype": "$TEST_TRANSPORT", 00:25:38.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.098 "adrfam": "ipv4", 00:25:38.098 "trsvcid": "$NVMF_PORT", 00:25:38.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.098 "hdgst": ${hdgst:-false}, 00:25:38.098 "ddgst": ${ddgst:-false} 00:25:38.098 }, 00:25:38.098 "method": "bdev_nvme_attach_controller" 00:25:38.098 } 00:25:38.098 EOF 00:25:38.098 )") 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
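Each create_subsystem call in these tests drives the same four RPCs through rpc_cmd. Issued by hand against the target's default /var/tmp/spdk.sock they would look like this sketch, with arguments as traced for this subtest (only --dif-type and the fio knobs change between subtests):

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420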
00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:38.098 "params": { 00:25:38.098 "name": "Nvme0", 00:25:38.098 "trtype": "tcp", 00:25:38.098 "traddr": "10.0.0.2", 00:25:38.098 "adrfam": "ipv4", 00:25:38.098 "trsvcid": "4420", 00:25:38.098 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:38.098 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:38.098 "hdgst": false, 00:25:38.098 "ddgst": false 00:25:38.098 }, 00:25:38.098 "method": "bdev_nvme_attach_controller" 00:25:38.098 }' 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:38.098 19:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.356 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:38.356 ... 
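Outside the harness, the fio launch above has two moving parts: preload SPDK's fio bdev plugin, then hand the generated attach-controller JSON to the spdk_bdev ioengine. A hand-run equivalent (bdev.json and dif.fio are hypothetical stand-ins for the two /dev/fd streams the fio_bdev wrapper feeds in):

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio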
00:25:38.356 fio-3.35
00:25:38.356 Starting 3 threads
00:25:38.356 EAL: No free 2048 kB hugepages reported on node 1
00:25:44.977
00:25:44.977 filename0: (groupid=0, jobs=1): err= 0: pid=2654622: Wed Jul 24 19:22:49 2024
00:25:44.977 read: IOPS=187, BW=23.5MiB/s (24.6MB/s)(118MiB/5045msec)
00:25:44.977 slat (nsec): min=5133, max=81770, avg=21998.42, stdev=5824.96
00:25:44.977 clat (usec): min=5468, max=93181, avg=15909.99, stdev=14235.67
00:25:44.977 lat (usec): min=5483, max=93209, avg=15931.99, stdev=14235.45
00:25:44.977 clat percentiles (usec):
00:25:44.977 | 1.00th=[ 6128], 5.00th=[ 7832], 10.00th=[ 8455], 20.00th=[ 9110],
00:25:44.977 | 30.00th=[10028], 40.00th=[11469], 50.00th=[12256], 60.00th=[12649],
00:25:44.977 | 70.00th=[13173], 80.00th=[13829], 90.00th=[46924], 95.00th=[51643],
00:25:44.977 | 99.00th=[88605], 99.50th=[90702], 99.90th=[92799], 99.95th=[92799],
00:25:44.977 | 99.99th=[92799]
00:25:44.977 bw ( KiB/s): min=12288, max=32256, per=32.70%, avg=24166.40, stdev=6078.70, samples=10
00:25:44.977 iops : min= 96, max= 252, avg=188.80, stdev=47.49, samples=10
00:25:44.977 lat (msec) : 10=30.10%, 20=59.13%, 50=3.38%, 100=7.39%
00:25:44.977 cpu : usr=93.97%, sys=4.70%, ctx=168, majf=0, minf=188
00:25:44.977 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:44.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:44.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:44.977 issued rwts: total=947,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:44.977 latency : target=0, window=0, percentile=100.00%, depth=3
00:25:44.977 filename0: (groupid=0, jobs=1): err= 0: pid=2654623: Wed Jul 24 19:22:49 2024
00:25:44.977 read: IOPS=200, BW=25.1MiB/s (26.3MB/s)(126MiB/5005msec)
00:25:44.977 slat (nsec): min=5700, max=87175, avg=14953.93, stdev=5087.49
00:25:44.977 clat (usec): min=5390, max=89851, avg=14933.71, stdev=11148.69
00:25:44.977 lat (usec): min=5401, max=89871, avg=14948.66, stdev=11148.89
00:25:44.977 clat percentiles (usec):
00:25:44.977 | 1.00th=[ 5800], 5.00th=[ 6259], 10.00th=[ 6587], 20.00th=[ 8717],
00:25:44.977 | 30.00th=[ 9765], 40.00th=[11076], 50.00th=[12387], 60.00th=[13960],
00:25:44.977 | 70.00th=[15139], 80.00th=[16909], 90.00th=[19006], 95.00th=[48497],
00:25:44.977 | 99.00th=[55313], 99.50th=[56886], 99.90th=[86508], 99.95th=[89654],
00:25:44.977 | 99.99th=[89654]
00:25:44.977 bw ( KiB/s): min=18944, max=29440, per=34.68%, avg=25629.80, stdev=3479.87, samples=10
00:25:44.977 iops : min= 148, max= 230, avg=200.20, stdev=27.23, samples=10
00:25:44.977 lat (msec) : 10=31.77%, 20=60.56%, 50=2.99%, 100=4.68%
00:25:44.977 cpu : usr=95.08%, sys=4.54%, ctx=9, majf=0, minf=187
00:25:44.977 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:44.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:44.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:44.977 issued rwts: total=1004,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:44.977 latency : target=0, window=0, percentile=100.00%, depth=3
00:25:44.977 filename0: (groupid=0, jobs=1): err= 0: pid=2654624: Wed Jul 24 19:22:49 2024
00:25:44.977 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(120MiB/5045msec)
00:25:44.977 slat (nsec): min=5957, max=44336, avg=14613.23, stdev=4983.30
00:25:44.977 clat (usec): min=5430, max=95118, avg=15625.05, stdev=12019.84
00:25:44.977 lat (usec): min=5442, max=95130, avg=15639.66, stdev=12019.73
00:25:44.977 clat percentiles (usec):
00:25:44.977 | 1.00th=[ 5866], 5.00th=[ 6390], 10.00th=[ 7701], 20.00th=[ 9503],
00:25:44.977 | 30.00th=[10683], 40.00th=[11863], 50.00th=[12649], 60.00th=[13173],
00:25:44.977 | 70.00th=[13829], 80.00th=[15008], 90.00th=[20055], 95.00th=[50594],
00:25:44.977 | 99.00th=[55313], 99.50th=[56361], 99.90th=[94897], 99.95th=[94897],
00:25:44.977 | 99.99th=[94897]
00:25:44.977 bw ( KiB/s): min=19456, max=30208, per=33.22%, avg=24550.40, stdev=4588.13, samples=10
00:25:44.977 iops : min= 152, max= 236, avg=191.80, stdev=35.84, samples=10
00:25:44.977 lat (msec) : 10=24.01%, 20=65.90%, 50=4.57%, 100=5.51%
00:25:44.977 cpu : usr=95.00%, sys=4.62%, ctx=12, majf=0, minf=73
00:25:44.977 IO depths : 1=2.6%, 2=97.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:25:44.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:44.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:44.977 issued rwts: total=962,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:44.977 latency : target=0, window=0, percentile=100.00%, depth=3
00:25:44.977
00:25:44.977 Run status group 0 (all jobs):
00:25:44.977 READ: bw=72.2MiB/s (75.7MB/s), 23.5MiB/s-25.1MiB/s (24.6MB/s-26.3MB/s), io=364MiB (382MB), run=5005-5045msec
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
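The teardown traced above is the mirror image of the setup pair; as a hand-run sketch against the same RPC socket:

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0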
00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.977 bdev_null0 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.977 [2024-07-24 19:22:49.933738] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:44.977 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.978 bdev_null1 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.978 bdev_null2 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.978 19:22:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:44.978 { 00:25:44.978 "params": { 00:25:44.978 "name": "Nvme$subsystem", 00:25:44.978 "trtype": "$TEST_TRANSPORT", 00:25:44.978 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.978 "adrfam": "ipv4", 00:25:44.978 "trsvcid": "$NVMF_PORT", 00:25:44.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.978 "hdgst": ${hdgst:-false}, 00:25:44.978 "ddgst": ${ddgst:-false} 00:25:44.978 }, 00:25:44.978 "method": "bdev_nvme_attach_controller" 00:25:44.978 } 00:25:44.978 EOF 00:25:44.978 )") 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:44.978 { 00:25:44.978 "params": { 00:25:44.978 "name": "Nvme$subsystem", 00:25:44.978 "trtype": "$TEST_TRANSPORT", 00:25:44.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.978 "adrfam": "ipv4", 00:25:44.978 "trsvcid": "$NVMF_PORT", 00:25:44.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.978 "hdgst": ${hdgst:-false}, 00:25:44.978 "ddgst": ${ddgst:-false} 00:25:44.978 }, 00:25:44.978 "method": "bdev_nvme_attach_controller" 00:25:44.978 } 00:25:44.978 EOF 00:25:44.978 )") 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:44.978 { 00:25:44.978 "params": { 00:25:44.978 "name": "Nvme$subsystem", 00:25:44.978 "trtype": "$TEST_TRANSPORT", 00:25:44.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.978 "adrfam": "ipv4", 00:25:44.978 "trsvcid": "$NVMF_PORT", 00:25:44.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.978 "hdgst": ${hdgst:-false}, 00:25:44.978 "ddgst": ${ddgst:-false} 00:25:44.978 }, 00:25:44.978 "method": "bdev_nvme_attach_controller" 00:25:44.978 } 00:25:44.978 EOF 00:25:44.978 )") 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:44.978 19:22:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:44.978 "params": { 00:25:44.978 "name": "Nvme0", 00:25:44.978 "trtype": "tcp", 00:25:44.978 "traddr": "10.0.0.2", 00:25:44.978 "adrfam": "ipv4", 00:25:44.978 "trsvcid": "4420", 00:25:44.978 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:44.978 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:44.978 "hdgst": false, 00:25:44.978 "ddgst": false 00:25:44.978 }, 00:25:44.978 "method": "bdev_nvme_attach_controller" 00:25:44.978 },{ 00:25:44.978 "params": { 00:25:44.978 "name": "Nvme1", 00:25:44.978 "trtype": "tcp", 00:25:44.978 "traddr": "10.0.0.2", 00:25:44.978 "adrfam": "ipv4", 00:25:44.978 "trsvcid": "4420", 00:25:44.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:44.978 "hdgst": false, 00:25:44.978 "ddgst": false 00:25:44.978 }, 00:25:44.978 "method": "bdev_nvme_attach_controller" 00:25:44.978 },{ 00:25:44.978 "params": { 00:25:44.978 "name": "Nvme2", 00:25:44.978 "trtype": "tcp", 00:25:44.978 "traddr": "10.0.0.2", 00:25:44.978 "adrfam": "ipv4", 00:25:44.978 "trsvcid": "4420", 00:25:44.978 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:44.978 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:44.978 "hdgst": false, 00:25:44.978 "ddgst": false 00:25:44.979 }, 00:25:44.979 "method": "bdev_nvme_attach_controller" 00:25:44.979 }' 00:25:44.979 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:44.979 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:44.979 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:44.979 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:44.979 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:44.979 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:44.979 19:22:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:25:44.979 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:44.979 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:44.979 19:22:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:44.979 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:44.979 ... 00:25:44.979 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:44.979 ... 00:25:44.979 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:44.979 ... 00:25:44.979 fio-3.35 00:25:44.979 Starting 24 threads 00:25:44.979 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.187 00:25:57.187 filename0: (groupid=0, jobs=1): err= 0: pid=2655275: Wed Jul 24 19:23:01 2024 00:25:57.187 read: IOPS=120, BW=483KiB/s (494kB/s)(4856KiB/10061msec) 00:25:57.187 slat (usec): min=7, max=142, avg=72.99, stdev=27.73 00:25:57.187 clat (msec): min=20, max=395, avg=131.98, stdev=123.90 00:25:57.187 lat (msec): min=20, max=395, avg=132.06, stdev=123.89 00:25:57.187 clat percentiles (msec): 00:25:57.187 | 1.00th=[ 22], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:25:57.187 | 30.00th=[ 27], 40.00th=[ 33], 50.00th=[ 38], 60.00th=[ 155], 00:25:57.187 | 70.00th=[ 241], 80.00th=[ 275], 90.00th=[ 313], 95.00th=[ 330], 00:25:57.187 | 99.00th=[ 384], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:25:57.187 | 99.99th=[ 397] 00:25:57.187 bw ( KiB/s): min= 144, max= 2528, per=5.34%, avg=480.80, stdev=635.72, samples=20 00:25:57.187 iops : min= 36, max= 632, avg=120.20, stdev=158.93, samples=20 00:25:57.187 lat (msec) : 50=53.46%, 100=3.54%, 250=19.44%, 500=23.56% 00:25:57.187 cpu : usr=98.54%, sys=1.03%, ctx=38, majf=0, minf=9 00:25:57.187 IO depths : 1=0.9%, 2=2.6%, 4=10.9%, 8=73.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:25:57.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.187 complete : 0=0.0%, 4=90.1%, 8=4.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.187 issued rwts: total=1214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.187 filename0: (groupid=0, jobs=1): err= 0: pid=2655276: Wed Jul 24 19:23:01 2024 00:25:57.187 read: IOPS=87, BW=351KiB/s (360kB/s)(3520KiB/10018msec) 00:25:57.187 slat (usec): min=11, max=158, avg=91.70, stdev=17.55 00:25:57.187 clat (msec): min=29, max=550, avg=181.36, stdev=179.61 00:25:57.187 lat (msec): min=29, max=550, avg=181.46, stdev=179.61 00:25:57.187 clat percentiles (msec): 00:25:57.187 | 1.00th=[ 34], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 36], 00:25:57.187 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 71], 00:25:57.187 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 414], 95.00th=[ 451], 00:25:57.187 | 99.00th=[ 498], 99.50th=[ 527], 99.90th=[ 550], 99.95th=[ 550], 00:25:57.187 | 99.99th=[ 550] 00:25:57.187 bw ( KiB/s): min= 112, max= 1792, per=3.84%, avg=345.60, stdev=494.69, samples=20 00:25:57.187 iops : min= 28, max= 448, avg=86.40, stdev=123.67, samples=20 00:25:57.187 lat (msec) : 50=58.18%, 100=1.82%, 250=0.23%, 500=38.86%, 750=0.91% 00:25:57.187 cpu : usr=98.47%, sys=1.07%, ctx=47, majf=0, 
minf=9 00:25:57.187 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:25:57.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.187 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.187 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.187 filename0: (groupid=0, jobs=1): err= 0: pid=2655277: Wed Jul 24 19:23:01 2024 00:25:57.187 read: IOPS=87, BW=351KiB/s (359kB/s)(3520KiB/10041msec) 00:25:57.188 slat (usec): min=20, max=165, avg=104.79, stdev=21.11 00:25:57.188 clat (msec): min=29, max=795, avg=181.65, stdev=192.75 00:25:57.188 lat (msec): min=29, max=795, avg=181.76, stdev=192.75 00:25:57.188 clat percentiles (msec): 00:25:57.188 | 1.00th=[ 34], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 36], 00:25:57.188 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 52], 00:25:57.188 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 414], 95.00th=[ 447], 00:25:57.188 | 99.00th=[ 793], 99.50th=[ 793], 99.90th=[ 793], 99.95th=[ 793], 00:25:57.188 | 99.99th=[ 793] 00:25:57.188 bw ( KiB/s): min= 127, max= 1792, per=3.24%, avg=291.50, stdev=415.57, samples=18 00:25:57.188 iops : min= 31, max= 448, avg=72.83, stdev=103.91, samples=18 00:25:57.188 lat (msec) : 50=58.41%, 100=3.41%, 500=35.68%, 750=0.68%, 1000=1.82% 00:25:57.188 cpu : usr=97.34%, sys=1.57%, ctx=110, majf=0, minf=9 00:25:57.188 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:25:57.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.188 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.188 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.188 filename0: (groupid=0, jobs=1): err= 0: pid=2655278: Wed Jul 24 19:23:01 2024 00:25:57.188 read: IOPS=87, BW=352KiB/s (360kB/s)(3520KiB/10013msec) 00:25:57.188 slat (nsec): min=4988, max=66819, avg=16747.79, stdev=7295.13 00:25:57.188 clat (msec): min=34, max=683, avg=181.89, stdev=182.91 00:25:57.188 lat (msec): min=34, max=683, avg=181.90, stdev=182.91 00:25:57.188 clat percentiles (msec): 00:25:57.188 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:25:57.188 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 64], 00:25:57.188 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 405], 95.00th=[ 426], 00:25:57.188 | 99.00th=[ 684], 99.50th=[ 684], 99.90th=[ 684], 99.95th=[ 684], 00:25:57.188 | 99.99th=[ 684] 00:25:57.188 bw ( KiB/s): min= 128, max= 1792, per=4.04%, avg=363.42, stdev=502.51, samples=19 00:25:57.188 iops : min= 32, max= 448, avg=90.84, stdev=125.59, samples=19 00:25:57.188 lat (msec) : 50=58.18%, 100=1.82%, 250=1.82%, 500=36.36%, 750=1.82% 00:25:57.188 cpu : usr=96.73%, sys=2.03%, ctx=230, majf=0, minf=9 00:25:57.188 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:57.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.188 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.188 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.188 filename0: (groupid=0, jobs=1): err= 0: pid=2655279: Wed Jul 24 19:23:01 2024 00:25:57.188 read: IOPS=103, BW=413KiB/s (423kB/s)(4144KiB/10041msec) 00:25:57.188 slat (nsec): min=8702, max=60786, avg=13331.45, 
stdev=5941.86 00:25:57.188 clat (msec): min=20, max=489, avg=154.95, stdev=128.59 00:25:57.188 lat (msec): min=20, max=489, avg=154.96, stdev=128.59 00:25:57.188 clat percentiles (msec): 00:25:57.188 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:25:57.188 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 63], 60.00th=[ 239], 00:25:57.188 | 70.00th=[ 245], 80.00th=[ 288], 90.00th=[ 313], 95.00th=[ 338], 00:25:57.188 | 99.00th=[ 489], 99.50th=[ 489], 99.90th=[ 489], 99.95th=[ 489], 00:25:57.188 | 99.99th=[ 489] 00:25:57.188 bw ( KiB/s): min= 128, max= 1792, per=4.54%, avg=408.00, stdev=478.44, samples=20 00:25:57.188 iops : min= 32, max= 448, avg=102.00, stdev=119.61, samples=20 00:25:57.188 lat (msec) : 50=49.42%, 100=2.12%, 250=24.13%, 500=24.32% 00:25:57.188 cpu : usr=98.17%, sys=1.34%, ctx=27, majf=0, minf=9 00:25:57.188 IO depths : 1=3.4%, 2=7.1%, 4=17.5%, 8=62.8%, 16=9.2%, 32=0.0%, >=64=0.0% 00:25:57.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.188 complete : 0=0.0%, 4=92.0%, 8=2.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.188 issued rwts: total=1036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.188 filename0: (groupid=0, jobs=1): err= 0: pid=2655280: Wed Jul 24 19:23:01 2024 00:25:57.188 read: IOPS=87, BW=351KiB/s (359kB/s)(3520KiB/10040msec) 00:25:57.188 slat (nsec): min=8804, max=67535, avg=15914.75, stdev=6162.06 00:25:57.188 clat (msec): min=27, max=686, avg=182.04, stdev=182.97 00:25:57.188 lat (msec): min=27, max=686, avg=182.06, stdev=182.97 00:25:57.188 clat percentiles (msec): 00:25:57.188 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:25:57.188 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 64], 00:25:57.188 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 405], 95.00th=[ 426], 00:25:57.188 | 99.00th=[ 684], 99.50th=[ 684], 99.90th=[ 684], 99.95th=[ 684], 00:25:57.188 | 99.99th=[ 684] 00:25:57.188 bw ( KiB/s): min= 128, max= 1792, per=4.04%, avg=363.79, stdev=503.31, samples=19 00:25:57.188 iops : min= 32, max= 448, avg=90.95, stdev=125.83, samples=19 00:25:57.188 lat (msec) : 50=57.95%, 100=2.05%, 250=1.59%, 500=36.59%, 750=1.82% 00:25:57.188 cpu : usr=98.00%, sys=1.22%, ctx=61, majf=0, minf=9 00:25:57.188 IO depths : 1=2.6%, 2=8.9%, 4=25.0%, 8=53.6%, 16=9.9%, 32=0.0%, >=64=0.0% 00:25:57.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.188 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.188 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.188 filename0: (groupid=0, jobs=1): err= 0: pid=2655281: Wed Jul 24 19:23:01 2024 00:25:57.188 read: IOPS=89, BW=358KiB/s (366kB/s)(3584KiB/10014msec) 00:25:57.188 slat (usec): min=5, max=202, avg=100.16, stdev=29.44 00:25:57.188 clat (msec): min=27, max=483, avg=177.96, stdev=173.78 00:25:57.188 lat (msec): min=27, max=483, avg=178.06, stdev=173.78 00:25:57.188 clat percentiles (msec): 00:25:57.188 | 1.00th=[ 34], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 36], 00:25:57.188 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 144], 00:25:57.188 | 70.00th=[ 384], 80.00th=[ 393], 90.00th=[ 405], 95.00th=[ 422], 00:25:57.188 | 99.00th=[ 477], 99.50th=[ 481], 99.90th=[ 485], 99.95th=[ 485], 00:25:57.188 | 99.99th=[ 485] 00:25:57.188 bw ( KiB/s): min= 128, max= 1776, per=3.92%, avg=352.00, stdev=492.72, samples=20 00:25:57.188 iops : min= 
32, max= 444, avg=88.00, stdev=123.18, samples=20 00:25:57.188 lat (msec) : 50=57.14%, 100=1.79%, 250=3.35%, 500=37.72% 00:25:57.188 cpu : usr=97.41%, sys=1.58%, ctx=107, majf=0, minf=9 00:25:57.188 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:25:57.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.188 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.188 issued rwts: total=896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.188 filename0: (groupid=0, jobs=1): err= 0: pid=2655282: Wed Jul 24 19:23:01 2024 00:25:57.188 read: IOPS=86, BW=345KiB/s (354kB/s)(3456KiB/10003msec) 00:25:57.188 slat (usec): min=4, max=135, avg=33.50, stdev=12.53 00:25:57.188 clat (msec): min=34, max=809, avg=184.94, stdev=193.73 00:25:57.188 lat (msec): min=34, max=809, avg=184.98, stdev=193.73 00:25:57.188 clat percentiles (msec): 00:25:57.188 | 1.00th=[ 35], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 36], 00:25:57.188 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 64], 00:25:57.188 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 409], 95.00th=[ 426], 00:25:57.188 | 99.00th=[ 810], 99.50th=[ 810], 99.90th=[ 810], 99.95th=[ 810], 00:25:57.188 | 99.99th=[ 810] 00:25:57.188 bw ( KiB/s): min= 128, max= 1664, per=3.16%, avg=284.44, stdev=388.85, samples=18 00:25:57.188 iops : min= 32, max= 416, avg=71.11, stdev=97.21, samples=18 00:25:57.188 lat (msec) : 50=59.26%, 100=1.85%, 500=36.81%, 750=0.23%, 1000=1.85% 00:25:57.188 cpu : usr=98.26%, sys=1.12%, ctx=60, majf=0, minf=9 00:25:57.188 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:25:57.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.188 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.188 issued rwts: total=864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.188 filename1: (groupid=0, jobs=1): err= 0: pid=2655283: Wed Jul 24 19:23:01 2024 00:25:57.188 read: IOPS=87, BW=351KiB/s (360kB/s)(3520KiB/10018msec) 00:25:57.188 slat (usec): min=12, max=115, avg=33.51, stdev=11.18 00:25:57.188 clat (msec): min=29, max=551, avg=181.83, stdev=178.97 00:25:57.188 lat (msec): min=29, max=552, avg=181.86, stdev=178.97 00:25:57.188 clat percentiles (msec): 00:25:57.188 | 1.00th=[ 35], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 36], 00:25:57.188 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 72], 00:25:57.188 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 409], 95.00th=[ 426], 00:25:57.188 | 99.00th=[ 502], 99.50th=[ 518], 99.90th=[ 550], 99.95th=[ 550], 00:25:57.188 | 99.99th=[ 550] 00:25:57.188 bw ( KiB/s): min= 112, max= 1792, per=3.84%, avg=345.60, stdev=494.88, samples=20 00:25:57.188 iops : min= 28, max= 448, avg=86.40, stdev=123.72, samples=20 00:25:57.188 lat (msec) : 50=58.18%, 100=1.82%, 500=39.09%, 750=0.91% 00:25:57.188 cpu : usr=97.45%, sys=1.58%, ctx=51, majf=0, minf=9 00:25:57.188 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:25:57.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.188 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.188 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.189 filename1: (groupid=0, jobs=1): err= 0: pid=2655284: Wed 
Jul 24 19:23:01 2024 00:25:57.189 read: IOPS=90, BW=363KiB/s (372kB/s)(3648KiB/10040msec) 00:25:57.189 slat (usec): min=12, max=147, avg=87.05, stdev=32.21 00:25:57.189 clat (msec): min=21, max=627, avg=175.42, stdev=169.89 00:25:57.189 lat (msec): min=21, max=627, avg=175.50, stdev=169.88 00:25:57.189 clat percentiles (msec): 00:25:57.189 | 1.00th=[ 34], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 36], 00:25:57.189 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 213], 00:25:57.189 | 70.00th=[ 342], 80.00th=[ 388], 90.00th=[ 397], 95.00th=[ 405], 00:25:57.189 | 99.00th=[ 575], 99.50th=[ 584], 99.90th=[ 625], 99.95th=[ 625], 00:25:57.189 | 99.99th=[ 625] 00:25:57.189 bw ( KiB/s): min= 128, max= 1792, per=3.99%, avg=358.40, stdev=491.00, samples=20 00:25:57.189 iops : min= 32, max= 448, avg=89.60, stdev=122.75, samples=20 00:25:57.189 lat (msec) : 50=55.92%, 100=1.97%, 250=4.82%, 500=35.75%, 750=1.54% 00:25:57.189 cpu : usr=97.91%, sys=1.36%, ctx=25, majf=0, minf=9 00:25:57.189 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:25:57.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.189 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.189 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.189 filename1: (groupid=0, jobs=1): err= 0: pid=2655285: Wed Jul 24 19:23:01 2024 00:25:57.189 read: IOPS=90, BW=363KiB/s (371kB/s)(3648KiB/10059msec) 00:25:57.189 slat (usec): min=4, max=175, avg=98.53, stdev=23.72 00:25:57.189 clat (msec): min=33, max=589, avg=175.63, stdev=172.52 00:25:57.189 lat (msec): min=33, max=589, avg=175.73, stdev=172.52 00:25:57.189 clat percentiles (msec): 00:25:57.189 | 1.00th=[ 34], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 36], 00:25:57.189 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 194], 00:25:57.189 | 70.00th=[ 380], 80.00th=[ 388], 90.00th=[ 401], 95.00th=[ 426], 00:25:57.189 | 99.00th=[ 510], 99.50th=[ 575], 99.90th=[ 592], 99.95th=[ 592], 00:25:57.189 | 99.99th=[ 592] 00:25:57.189 bw ( KiB/s): min= 112, max= 1792, per=3.99%, avg=358.40, stdev=504.16, samples=20 00:25:57.189 iops : min= 28, max= 448, avg=89.60, stdev=126.04, samples=20 00:25:57.189 lat (msec) : 50=56.14%, 100=3.51%, 250=4.39%, 500=34.87%, 750=1.10% 00:25:57.189 cpu : usr=97.77%, sys=1.33%, ctx=82, majf=0, minf=9 00:25:57.189 IO depths : 1=5.6%, 2=11.7%, 4=24.7%, 8=51.1%, 16=6.9%, 32=0.0%, >=64=0.0% 00:25:57.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.189 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.189 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.189 filename1: (groupid=0, jobs=1): err= 0: pid=2655286: Wed Jul 24 19:23:01 2024 00:25:57.189 read: IOPS=108, BW=433KiB/s (443kB/s)(4352KiB/10060msec) 00:25:57.189 slat (nsec): min=8692, max=40448, avg=12816.97, stdev=5444.57 00:25:57.189 clat (msec): min=20, max=424, avg=147.82, stdev=117.61 00:25:57.189 lat (msec): min=20, max=424, avg=147.83, stdev=117.61 00:25:57.189 clat percentiles (msec): 00:25:57.189 | 1.00th=[ 21], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:25:57.189 | 30.00th=[ 37], 40.00th=[ 39], 50.00th=[ 91], 60.00th=[ 194], 00:25:57.189 | 70.00th=[ 251], 80.00th=[ 292], 90.00th=[ 305], 95.00th=[ 317], 00:25:57.189 | 99.00th=[ 355], 99.50th=[ 355], 99.90th=[ 426], 99.95th=[ 426], 
00:25:57.189 | 99.99th=[ 426] 00:25:57.189 bw ( KiB/s): min= 144, max= 1792, per=4.76%, avg=428.80, stdev=483.99, samples=20 00:25:57.189 iops : min= 36, max= 448, avg=107.20, stdev=121.00, samples=20 00:25:57.189 lat (msec) : 50=46.14%, 100=4.04%, 250=19.30%, 500=30.51% 00:25:57.189 cpu : usr=98.57%, sys=1.05%, ctx=17, majf=0, minf=9 00:25:57.189 IO depths : 1=2.8%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.7%, 32=0.0%, >=64=0.0% 00:25:57.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.189 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.189 issued rwts: total=1088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.189 filename1: (groupid=0, jobs=1): err= 0: pid=2655287: Wed Jul 24 19:23:01 2024 00:25:57.189 read: IOPS=87, BW=351KiB/s (359kB/s)(3520KiB/10041msec) 00:25:57.189 slat (usec): min=11, max=163, avg=100.69, stdev=25.16 00:25:57.189 clat (msec): min=29, max=796, avg=181.66, stdev=192.79 00:25:57.189 lat (msec): min=29, max=796, avg=181.76, stdev=192.80 00:25:57.189 clat percentiles (msec): 00:25:57.189 | 1.00th=[ 34], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 36], 00:25:57.189 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 53], 00:25:57.189 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 414], 95.00th=[ 447], 00:25:57.189 | 99.00th=[ 793], 99.50th=[ 793], 99.90th=[ 793], 99.95th=[ 793], 00:25:57.189 | 99.99th=[ 793] 00:25:57.189 bw ( KiB/s): min= 127, max= 1792, per=3.24%, avg=291.50, stdev=415.57, samples=18 00:25:57.189 iops : min= 31, max= 448, avg=72.83, stdev=103.91, samples=18 00:25:57.189 lat (msec) : 50=58.41%, 100=3.41%, 500=35.68%, 750=0.68%, 1000=1.82% 00:25:57.189 cpu : usr=97.79%, sys=1.39%, ctx=113, majf=0, minf=9 00:25:57.189 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:25:57.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.189 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.189 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.189 filename1: (groupid=0, jobs=1): err= 0: pid=2655288: Wed Jul 24 19:23:01 2024 00:25:57.189 read: IOPS=87, BW=350KiB/s (358kB/s)(3512KiB/10046msec) 00:25:57.189 slat (usec): min=10, max=120, avg=33.91, stdev=15.66 00:25:57.189 clat (msec): min=25, max=800, avg=182.73, stdev=192.01 00:25:57.189 lat (msec): min=26, max=800, avg=182.77, stdev=192.00 00:25:57.189 clat percentiles (msec): 00:25:57.189 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 36], 00:25:57.189 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 63], 00:25:57.189 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 409], 95.00th=[ 426], 00:25:57.189 | 99.00th=[ 802], 99.50th=[ 802], 99.90th=[ 802], 99.95th=[ 802], 00:25:57.189 | 99.99th=[ 802] 00:25:57.189 bw ( KiB/s): min= 128, max= 1792, per=4.04%, avg=363.11, stdev=510.64, samples=19 00:25:57.189 iops : min= 32, max= 448, avg=90.74, stdev=127.56, samples=19 00:25:57.189 lat (msec) : 50=56.95%, 100=4.78%, 500=36.45%, 1000=1.82% 00:25:57.189 cpu : usr=98.42%, sys=1.17%, ctx=20, majf=0, minf=9 00:25:57.189 IO depths : 1=3.8%, 2=10.0%, 4=25.1%, 8=52.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:25:57.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.189 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.189 issued rwts: total=878,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:25:57.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.189 filename1: (groupid=0, jobs=1): err= 0: pid=2655289: Wed Jul 24 19:23:01 2024 00:25:57.189 read: IOPS=87, BW=350KiB/s (359kB/s)(3520KiB/10043msec) 00:25:57.189 slat (usec): min=31, max=145, avg=91.74, stdev=14.04 00:25:57.189 clat (msec): min=26, max=664, avg=181.81, stdev=185.68 00:25:57.189 lat (msec): min=26, max=664, avg=181.90, stdev=185.68 00:25:57.189 clat percentiles (msec): 00:25:57.189 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 36], 00:25:57.189 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 72], 00:25:57.189 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 409], 95.00th=[ 439], 00:25:57.189 | 99.00th=[ 667], 99.50th=[ 667], 99.90th=[ 667], 99.95th=[ 667], 00:25:57.189 | 99.99th=[ 667] 00:25:57.189 bw ( KiB/s): min= 128, max= 1792, per=4.04%, avg=363.95, stdev=512.89, samples=19 00:25:57.189 iops : min= 32, max= 448, avg=90.95, stdev=128.12, samples=19 00:25:57.189 lat (msec) : 50=58.18%, 100=2.05%, 250=1.59%, 500=36.14%, 750=2.05% 00:25:57.189 cpu : usr=98.67%, sys=0.93%, ctx=12, majf=0, minf=9 00:25:57.189 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:25:57.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.189 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.189 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.189 filename1: (groupid=0, jobs=1): err= 0: pid=2655290: Wed Jul 24 19:23:01 2024 00:25:57.189 read: IOPS=87, BW=351KiB/s (359kB/s)(3520KiB/10042msec) 00:25:57.189 slat (usec): min=8, max=140, avg=97.22, stdev=15.91 00:25:57.189 clat (msec): min=26, max=796, avg=181.77, stdev=193.03 00:25:57.189 lat (msec): min=26, max=796, avg=181.87, stdev=193.03 00:25:57.189 clat percentiles (msec): 00:25:57.189 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 36], 00:25:57.189 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 52], 00:25:57.189 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 414], 95.00th=[ 451], 00:25:57.189 | 99.00th=[ 793], 99.50th=[ 793], 99.90th=[ 793], 99.95th=[ 793], 00:25:57.189 | 99.99th=[ 793] 00:25:57.189 bw ( KiB/s): min= 112, max= 1792, per=3.24%, avg=291.50, stdev=415.61, samples=18 00:25:57.189 iops : min= 28, max= 448, avg=72.83, stdev=103.92, samples=18 00:25:57.189 lat (msec) : 50=58.41%, 100=3.41%, 250=0.23%, 500=35.45%, 750=0.68% 00:25:57.189 lat (msec) : 1000=1.82% 00:25:57.189 cpu : usr=98.40%, sys=1.20%, ctx=14, majf=0, minf=9 00:25:57.190 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:25:57.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.190 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.190 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.190 filename2: (groupid=0, jobs=1): err= 0: pid=2655291: Wed Jul 24 19:23:01 2024 00:25:57.190 read: IOPS=109, BW=439KiB/s (450kB/s)(4408KiB/10038msec) 00:25:57.190 slat (usec): min=4, max=132, avg=74.91, stdev=29.76 00:25:57.190 clat (msec): min=5, max=432, avg=144.95, stdev=127.06 00:25:57.190 lat (msec): min=5, max=432, avg=145.03, stdev=127.06 00:25:57.190 clat percentiles (msec): 00:25:57.190 | 1.00th=[ 6], 5.00th=[ 25], 10.00th=[ 28], 20.00th=[ 35], 00:25:57.190 | 30.00th=[ 36], 40.00th=[ 37], 
50.00th=[ 40], 60.00th=[ 199], 00:25:57.190 | 70.00th=[ 249], 80.00th=[ 292], 90.00th=[ 317], 95.00th=[ 372], 00:25:57.190 | 99.00th=[ 388], 99.50th=[ 393], 99.90th=[ 435], 99.95th=[ 435], 00:25:57.190 | 99.99th=[ 435] 00:25:57.190 bw ( KiB/s): min= 144, max= 1792, per=4.83%, avg=434.40, stdev=485.84, samples=20 00:25:57.190 iops : min= 36, max= 448, avg=108.60, stdev=121.46, samples=20 00:25:57.190 lat (msec) : 10=1.45%, 50=49.91%, 100=2.36%, 250=16.52%, 500=29.76% 00:25:57.190 cpu : usr=98.38%, sys=1.23%, ctx=13, majf=0, minf=9 00:25:57.190 IO depths : 1=1.7%, 2=6.2%, 4=19.4%, 8=61.8%, 16=10.9%, 32=0.0%, >=64=0.0% 00:25:57.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.190 complete : 0=0.0%, 4=92.6%, 8=1.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.190 issued rwts: total=1102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.190 filename2: (groupid=0, jobs=1): err= 0: pid=2655292: Wed Jul 24 19:23:01 2024 00:25:57.190 read: IOPS=120, BW=482KiB/s (493kB/s)(4832KiB/10034msec) 00:25:57.190 slat (usec): min=11, max=244, avg=88.58, stdev=29.88 00:25:57.190 clat (msec): min=15, max=392, avg=132.29, stdev=126.04 00:25:57.190 lat (msec): min=15, max=392, avg=132.38, stdev=126.04 00:25:57.190 clat percentiles (msec): 00:25:57.190 | 1.00th=[ 22], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 25], 00:25:57.190 | 30.00th=[ 27], 40.00th=[ 28], 50.00th=[ 32], 60.00th=[ 211], 00:25:57.190 | 70.00th=[ 243], 80.00th=[ 268], 90.00th=[ 309], 95.00th=[ 334], 00:25:57.190 | 99.00th=[ 380], 99.50th=[ 388], 99.90th=[ 393], 99.95th=[ 393], 00:25:57.190 | 99.99th=[ 393] 00:25:57.190 bw ( KiB/s): min= 128, max= 2320, per=5.33%, avg=479.20, stdev=648.65, samples=20 00:25:57.190 iops : min= 32, max= 580, avg=119.80, stdev=162.16, samples=20 00:25:57.190 lat (msec) : 20=0.33%, 50=56.62%, 100=0.83%, 250=17.88%, 500=24.34% 00:25:57.190 cpu : usr=97.49%, sys=1.55%, ctx=104, majf=0, minf=9 00:25:57.190 IO depths : 1=0.2%, 2=0.7%, 4=7.3%, 8=79.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:25:57.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.190 complete : 0=0.0%, 4=89.1%, 8=5.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.190 issued rwts: total=1208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.190 filename2: (groupid=0, jobs=1): err= 0: pid=2655293: Wed Jul 24 19:23:01 2024 00:25:57.190 read: IOPS=87, BW=350KiB/s (359kB/s)(3520KiB/10046msec) 00:25:57.190 slat (usec): min=8, max=138, avg=42.83, stdev=39.82 00:25:57.190 clat (msec): min=35, max=799, avg=182.21, stdev=185.29 00:25:57.190 lat (msec): min=35, max=799, avg=182.26, stdev=185.33 00:25:57.190 clat percentiles (msec): 00:25:57.190 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:25:57.190 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 75], 00:25:57.190 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 409], 95.00th=[ 439], 00:25:57.190 | 99.00th=[ 667], 99.50th=[ 667], 99.90th=[ 802], 99.95th=[ 802], 00:25:57.190 | 99.99th=[ 802] 00:25:57.190 bw ( KiB/s): min= 128, max= 1792, per=4.04%, avg=363.95, stdev=512.70, samples=19 00:25:57.190 iops : min= 32, max= 448, avg=90.95, stdev=128.07, samples=19 00:25:57.190 lat (msec) : 50=58.18%, 100=1.82%, 250=1.82%, 500=36.36%, 750=1.59% 00:25:57.190 lat (msec) : 1000=0.23% 00:25:57.190 cpu : usr=98.52%, sys=1.09%, ctx=14, majf=0, minf=9 00:25:57.190 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 
32=0.0%, >=64=0.0% 00:25:57.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.190 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.190 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.190 filename2: (groupid=0, jobs=1): err= 0: pid=2655294: Wed Jul 24 19:23:01 2024 00:25:57.190 read: IOPS=106, BW=427KiB/s (438kB/s)(4288KiB/10036msec) 00:25:57.190 slat (usec): min=8, max=122, avg=14.32, stdev=15.29 00:25:57.190 clat (msec): min=26, max=444, avg=149.37, stdev=119.15 00:25:57.190 lat (msec): min=26, max=444, avg=149.38, stdev=119.15 00:25:57.190 clat percentiles (msec): 00:25:57.190 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:25:57.190 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 92], 60.00th=[ 199], 00:25:57.190 | 70.00th=[ 262], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 317], 00:25:57.190 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 443], 99.95th=[ 443], 00:25:57.190 | 99.99th=[ 443] 00:25:57.190 bw ( KiB/s): min= 128, max= 1792, per=4.70%, avg=422.40, stdev=465.04, samples=20 00:25:57.190 iops : min= 32, max= 448, avg=105.60, stdev=116.26, samples=20 00:25:57.190 lat (msec) : 50=47.76%, 100=2.99%, 250=18.84%, 500=30.41% 00:25:57.190 cpu : usr=98.50%, sys=1.12%, ctx=18, majf=0, minf=9 00:25:57.190 IO depths : 1=2.5%, 2=8.8%, 4=25.0%, 8=53.7%, 16=10.0%, 32=0.0%, >=64=0.0% 00:25:57.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.190 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.190 issued rwts: total=1072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.190 filename2: (groupid=0, jobs=1): err= 0: pid=2655295: Wed Jul 24 19:23:01 2024 00:25:57.190 read: IOPS=86, BW=345KiB/s (354kB/s)(3456KiB/10005msec) 00:25:57.190 slat (nsec): min=17505, max=81906, avg=34228.72, stdev=12185.42 00:25:57.190 clat (msec): min=34, max=811, avg=184.94, stdev=193.39 00:25:57.190 lat (msec): min=34, max=811, avg=184.98, stdev=193.39 00:25:57.190 clat percentiles (msec): 00:25:57.190 | 1.00th=[ 35], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 36], 00:25:57.190 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 63], 00:25:57.190 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 409], 95.00th=[ 426], 00:25:57.190 | 99.00th=[ 810], 99.50th=[ 810], 99.90th=[ 810], 99.95th=[ 810], 00:25:57.190 | 99.99th=[ 810] 00:25:57.190 bw ( KiB/s): min= 128, max= 1664, per=3.16%, avg=284.44, stdev=389.12, samples=18 00:25:57.190 iops : min= 32, max= 416, avg=71.11, stdev=97.28, samples=18 00:25:57.190 lat (msec) : 50=59.26%, 100=1.85%, 500=37.04%, 1000=1.85% 00:25:57.190 cpu : usr=98.37%, sys=1.21%, ctx=78, majf=0, minf=9 00:25:57.190 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:57.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.190 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.190 issued rwts: total=864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.190 filename2: (groupid=0, jobs=1): err= 0: pid=2655296: Wed Jul 24 19:23:01 2024 00:25:57.190 read: IOPS=87, BW=351KiB/s (360kB/s)(3520KiB/10018msec) 00:25:57.190 slat (usec): min=10, max=138, avg=92.53, stdev=16.91 00:25:57.190 clat (msec): min=26, max=520, avg=181.35, stdev=178.01 00:25:57.190 lat (msec): 
min=26, max=520, avg=181.44, stdev=178.01 00:25:57.190 clat percentiles (msec): 00:25:57.190 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 36], 00:25:57.190 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 64], 00:25:57.190 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 409], 95.00th=[ 426], 00:25:57.190 | 99.00th=[ 489], 99.50th=[ 489], 99.90th=[ 523], 99.95th=[ 523], 00:25:57.190 | 99.99th=[ 523] 00:25:57.190 bw ( KiB/s): min= 112, max= 1792, per=3.84%, avg=345.60, stdev=495.07, samples=20 00:25:57.190 iops : min= 28, max= 448, avg=86.40, stdev=123.77, samples=20 00:25:57.190 lat (msec) : 50=58.18%, 100=1.82%, 500=39.77%, 750=0.23% 00:25:57.190 cpu : usr=98.72%, sys=0.86%, ctx=28, majf=0, minf=9 00:25:57.190 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:25:57.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.190 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.190 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.190 filename2: (groupid=0, jobs=1): err= 0: pid=2655297: Wed Jul 24 19:23:01 2024 00:25:57.190 read: IOPS=87, BW=350KiB/s (358kB/s)(3520KiB/10064msec) 00:25:57.190 slat (usec): min=7, max=132, avg=64.91, stdev=41.70 00:25:57.190 clat (msec): min=33, max=682, avg=182.03, stdev=184.00 00:25:57.190 lat (msec): min=33, max=682, avg=182.09, stdev=183.96 00:25:57.190 clat percentiles (msec): 00:25:57.190 | 1.00th=[ 35], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 36], 00:25:57.191 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 71], 00:25:57.191 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 405], 95.00th=[ 439], 00:25:57.191 | 99.00th=[ 684], 99.50th=[ 684], 99.90th=[ 684], 99.95th=[ 684], 00:25:57.191 | 99.99th=[ 684] 00:25:57.191 bw ( KiB/s): min= 128, max= 1776, per=4.04%, avg=363.79, stdev=503.31, samples=19 00:25:57.191 iops : min= 32, max= 444, avg=90.95, stdev=125.83, samples=19 00:25:57.191 lat (msec) : 50=57.95%, 100=2.05%, 250=1.82%, 500=36.36%, 750=1.82% 00:25:57.191 cpu : usr=98.29%, sys=1.33%, ctx=14, majf=0, minf=9 00:25:57.191 IO depths : 1=2.7%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:25:57.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.191 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.191 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.191 filename2: (groupid=0, jobs=1): err= 0: pid=2655298: Wed Jul 24 19:23:01 2024 00:25:57.191 read: IOPS=87, BW=350KiB/s (358kB/s)(3520KiB/10067msec) 00:25:57.191 slat (usec): min=11, max=152, avg=92.94, stdev=15.98 00:25:57.191 clat (msec): min=29, max=687, avg=182.25, stdev=186.67 00:25:57.191 lat (msec): min=29, max=687, avg=182.35, stdev=186.66 00:25:57.191 clat percentiles (msec): 00:25:57.191 | 1.00th=[ 35], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 36], 00:25:57.191 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 72], 00:25:57.191 | 70.00th=[ 388], 80.00th=[ 393], 90.00th=[ 409], 95.00th=[ 439], 00:25:57.191 | 99.00th=[ 684], 99.50th=[ 684], 99.90th=[ 693], 99.95th=[ 693], 00:25:57.191 | 99.99th=[ 693] 00:25:57.191 bw ( KiB/s): min= 128, max= 1792, per=4.04%, avg=363.79, stdev=524.37, samples=19 00:25:57.191 iops : min= 32, max= 448, avg=90.95, stdev=131.09, samples=19 00:25:57.191 lat (msec) : 50=58.18%, 100=2.05%, 250=1.59%, 500=36.14%, 
750=2.05% 00:25:57.191 cpu : usr=98.51%, sys=1.07%, ctx=12, majf=0, minf=9 00:25:57.191 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:25:57.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.191 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.191 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.191 00:25:57.191 Run status group 0 (all jobs): 00:25:57.191 READ: bw=8982KiB/s (9198kB/s), 345KiB/s-483KiB/s (354kB/s-494kB/s), io=88.3MiB (92.6MB), run=10003-10067msec 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:57.191 19:23:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.191 bdev_null0 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.191 [2024-07-24 19:23:01.627389] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.191 bdev_null1 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:57.191 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- 
# local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:57.192 { 00:25:57.192 "params": { 00:25:57.192 "name": "Nvme$subsystem", 00:25:57.192 "trtype": "$TEST_TRANSPORT", 00:25:57.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.192 "adrfam": "ipv4", 00:25:57.192 "trsvcid": "$NVMF_PORT", 00:25:57.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.192 "hdgst": ${hdgst:-false}, 00:25:57.192 "ddgst": ${ddgst:-false} 00:25:57.192 }, 00:25:57.192 "method": "bdev_nvme_attach_controller" 00:25:57.192 } 00:25:57.192 EOF 00:25:57.192 )") 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:57.192 { 00:25:57.192 "params": { 00:25:57.192 "name": "Nvme$subsystem", 00:25:57.192 "trtype": "$TEST_TRANSPORT", 00:25:57.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.192 "adrfam": "ipv4", 00:25:57.192 "trsvcid": "$NVMF_PORT", 00:25:57.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.192 "hdgst": ${hdgst:-false}, 00:25:57.192 "ddgst": ${ddgst:-false} 00:25:57.192 }, 00:25:57.192 "method": "bdev_nvme_attach_controller" 00:25:57.192 } 00:25:57.192 EOF 00:25:57.192 )") 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
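(How the /dev/fd/62 payload is assembled: gen_nvmf_target_json collects one bdev_nvme_attach_controller stanza per subsystem and comma-joins them, as the IFS=, and printf steps below show; the jq . step pretty-prints the assembled JSON. A hypothetical standalone reduction of that pattern, reproducing this run's two-subsystem output; the full helper's surrounding structure is not visible in this trace.)

config=()
for sub in 0 1; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme${sub}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${sub}",
    "hostnqn": "nqn.2016-06.io.spdk:host${sub}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,   # first IFS character becomes the join separator (the harness scopes this locally)
printf '%s\n' "${config[*]}"   # emits the comma-joined stanzas printed below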
00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:57.192 "params": { 00:25:57.192 "name": "Nvme0", 00:25:57.192 "trtype": "tcp", 00:25:57.192 "traddr": "10.0.0.2", 00:25:57.192 "adrfam": "ipv4", 00:25:57.192 "trsvcid": "4420", 00:25:57.192 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.192 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:57.192 "hdgst": false, 00:25:57.192 "ddgst": false 00:25:57.192 }, 00:25:57.192 "method": "bdev_nvme_attach_controller" 00:25:57.192 },{ 00:25:57.192 "params": { 00:25:57.192 "name": "Nvme1", 00:25:57.192 "trtype": "tcp", 00:25:57.192 "traddr": "10.0.0.2", 00:25:57.192 "adrfam": "ipv4", 00:25:57.192 "trsvcid": "4420", 00:25:57.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:57.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:57.192 "hdgst": false, 00:25:57.192 "ddgst": false 00:25:57.192 }, 00:25:57.192 "method": "bdev_nvme_attach_controller" 00:25:57.192 }' 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:57.192 19:23:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.192 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:57.192 ... 00:25:57.192 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:57.192 ... 
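The JSON assembled above and the generated fio job file reach fio through /dev/fd process substitutions (/dev/fd/62 and /dev/fd/61). A standalone sketch of the same launch, with config.json and job.fio as stand-ins for the two descriptors:

  # The spdk_bdev ioengine reads the bdev_nvme_attach_controller config via
  # --spdk_json_conf; the plugin itself is LD_PRELOADed, as fio_bdev does above.
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf config.json job.fio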
00:25:57.192 fio-3.35 00:25:57.192 Starting 4 threads 00:25:57.192 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.458 00:26:02.458 filename0: (groupid=0, jobs=1): err= 0: pid=2656351: Wed Jul 24 19:23:07 2024 00:26:02.458 read: IOPS=1611, BW=12.6MiB/s (13.2MB/s)(63.0MiB/5004msec) 00:26:02.458 slat (nsec): min=7389, max=68665, avg=12856.05, stdev=6674.51 00:26:02.458 clat (usec): min=1020, max=8860, avg=4922.67, stdev=569.71 00:26:02.458 lat (usec): min=1028, max=8874, avg=4935.53, stdev=569.83 00:26:02.458 clat percentiles (usec): 00:26:02.458 | 1.00th=[ 2933], 5.00th=[ 4146], 10.00th=[ 4424], 20.00th=[ 4686], 00:26:02.458 | 30.00th=[ 4883], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 5014], 00:26:02.458 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5342], 95.00th=[ 5735], 00:26:02.458 | 99.00th=[ 6718], 99.50th=[ 7701], 99.90th=[ 8160], 99.95th=[ 8717], 00:26:02.458 | 99.99th=[ 8848] 00:26:02.458 bw ( KiB/s): min=12656, max=13184, per=25.28%, avg=12891.20, stdev=163.18, samples=10 00:26:02.458 iops : min= 1582, max= 1648, avg=1611.40, stdev=20.40, samples=10 00:26:02.458 lat (msec) : 2=0.31%, 4=3.47%, 10=96.22% 00:26:02.458 cpu : usr=95.06%, sys=4.44%, ctx=9, majf=0, minf=81 00:26:02.458 IO depths : 1=0.1%, 2=9.5%, 4=61.1%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.458 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.458 issued rwts: total=8065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.458 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:02.458 filename0: (groupid=0, jobs=1): err= 0: pid=2656352: Wed Jul 24 19:23:07 2024 00:26:02.458 read: IOPS=1565, BW=12.2MiB/s (12.8MB/s)(61.2MiB/5001msec) 00:26:02.458 slat (usec): min=4, max=154, avg=18.79, stdev= 9.02 00:26:02.458 clat (usec): min=887, max=9285, avg=5038.03, stdev=740.43 00:26:02.458 lat (usec): min=908, max=9302, avg=5056.82, stdev=740.24 00:26:02.458 clat percentiles (usec): 00:26:02.458 | 1.00th=[ 2900], 5.00th=[ 4178], 10.00th=[ 4490], 20.00th=[ 4817], 00:26:02.458 | 30.00th=[ 4883], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 4948], 00:26:02.458 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5800], 95.00th=[ 6390], 00:26:02.458 | 99.00th=[ 7898], 99.50th=[ 8291], 99.90th=[ 9110], 99.95th=[ 9241], 00:26:02.458 | 99.99th=[ 9241] 00:26:02.458 bw ( KiB/s): min=11840, max=12912, per=24.55%, avg=12522.50, stdev=298.95, samples=10 00:26:02.458 iops : min= 1480, max= 1614, avg=1565.30, stdev=37.37, samples=10 00:26:02.458 lat (usec) : 1000=0.04% 00:26:02.458 lat (msec) : 2=0.36%, 4=3.07%, 10=96.54% 00:26:02.458 cpu : usr=87.56%, sys=7.28%, ctx=304, majf=0, minf=45 00:26:02.458 IO depths : 1=0.6%, 2=19.0%, 4=54.4%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.458 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.458 issued rwts: total=7828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.458 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:02.458 filename1: (groupid=0, jobs=1): err= 0: pid=2656353: Wed Jul 24 19:23:07 2024 00:26:02.458 read: IOPS=1595, BW=12.5MiB/s (13.1MB/s)(62.3MiB/5002msec) 00:26:02.458 slat (nsec): min=5217, max=89629, avg=13725.86, stdev=7730.41 00:26:02.459 clat (usec): min=1042, max=9358, avg=4964.51, stdev=587.07 00:26:02.459 lat (usec): min=1058, max=9378, avg=4978.24, stdev=587.32 00:26:02.459 clat percentiles (usec): 00:26:02.459 | 
1.00th=[ 3392], 5.00th=[ 4228], 10.00th=[ 4490], 20.00th=[ 4752], 00:26:02.459 | 30.00th=[ 4883], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 4948], 00:26:02.459 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5342], 95.00th=[ 5997], 00:26:02.459 | 99.00th=[ 7308], 99.50th=[ 7898], 99.90th=[ 8979], 99.95th=[ 9110], 00:26:02.459 | 99.99th=[ 9372] 00:26:02.459 bw ( KiB/s): min=12128, max=13184, per=25.03%, avg=12764.80, stdev=310.23, samples=10 00:26:02.459 iops : min= 1516, max= 1648, avg=1595.60, stdev=38.78, samples=10 00:26:02.459 lat (msec) : 2=0.14%, 4=3.27%, 10=96.59% 00:26:02.459 cpu : usr=92.86%, sys=5.30%, ctx=49, majf=0, minf=87 00:26:02.459 IO depths : 1=0.4%, 2=19.2%, 4=54.1%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.459 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.459 issued rwts: total=7979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.459 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:02.459 filename1: (groupid=0, jobs=1): err= 0: pid=2656354: Wed Jul 24 19:23:07 2024 00:26:02.459 read: IOPS=1605, BW=12.5MiB/s (13.1MB/s)(62.7MiB/5002msec) 00:26:02.459 slat (nsec): min=5274, max=67843, avg=17860.03, stdev=8123.72 00:26:02.459 clat (usec): min=1116, max=11340, avg=4910.86, stdev=605.22 00:26:02.459 lat (usec): min=1131, max=11360, avg=4928.72, stdev=605.65 00:26:02.459 clat percentiles (usec): 00:26:02.459 | 1.00th=[ 3097], 5.00th=[ 4178], 10.00th=[ 4424], 20.00th=[ 4752], 00:26:02.459 | 30.00th=[ 4817], 40.00th=[ 4883], 50.00th=[ 4883], 60.00th=[ 4948], 00:26:02.459 | 70.00th=[ 5014], 80.00th=[ 5014], 90.00th=[ 5211], 95.00th=[ 5866], 00:26:02.459 | 99.00th=[ 7439], 99.50th=[ 8029], 99.90th=[ 8586], 99.95th=[ 8848], 00:26:02.459 | 99.99th=[11338] 00:26:02.459 bw ( KiB/s): min=12576, max=13296, per=25.18%, avg=12844.80, stdev=242.75, samples=10 00:26:02.459 iops : min= 1572, max= 1662, avg=1605.60, stdev=30.34, samples=10 00:26:02.459 lat (msec) : 2=0.31%, 4=3.28%, 10=96.40%, 20=0.01% 00:26:02.459 cpu : usr=95.82%, sys=3.40%, ctx=85, majf=0, minf=98 00:26:02.459 IO depths : 1=0.8%, 2=21.6%, 4=52.4%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.459 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.459 issued rwts: total=8029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.459 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:02.459 00:26:02.459 Run status group 0 (all jobs): 00:26:02.459 READ: bw=49.8MiB/s (52.2MB/s), 12.2MiB/s-12.6MiB/s (12.8MB/s-13.2MB/s), io=249MiB (261MB), run=5001-5004msec 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:02.459 19:23:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.459 00:26:02.459 real 0m24.104s 00:26:02.459 user 4m33.270s 00:26:02.459 sys 0m5.616s 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:02.459 19:23:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:02.459 ************************************ 00:26:02.459 END TEST fio_dif_rand_params 00:26:02.459 ************************************ 00:26:02.459 19:23:07 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:02.459 19:23:07 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:02.459 19:23:07 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:02.459 19:23:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:02.459 ************************************ 00:26:02.459 START TEST fio_dif_digest 00:26:02.459 ************************************ 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:02.459 19:23:07 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:02.459 bdev_null0 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.459 19:23:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:02.459 [2024-07-24 19:23:08.009002] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local 
file 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:02.459 19:23:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:02.459 { 00:26:02.459 "params": { 00:26:02.459 "name": "Nvme$subsystem", 00:26:02.459 "trtype": "$TEST_TRANSPORT", 00:26:02.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.460 "adrfam": "ipv4", 00:26:02.460 "trsvcid": "$NVMF_PORT", 00:26:02.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.460 "hdgst": ${hdgst:-false}, 00:26:02.460 "ddgst": ${ddgst:-false} 00:26:02.460 }, 00:26:02.460 "method": "bdev_nvme_attach_controller" 00:26:02.460 } 00:26:02.460 EOF 00:26:02.460 )") 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
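The ldd/grep/awk dance interleaved above is the fio_plugin helper deciding what to LD_PRELOAD: if the plugin links a sanitizer runtime, that runtime has to be preloaded ahead of the plugin or fio fails at startup. A condensed sketch of the probe (the same loop runs for libasan and libclang_rt.asan; asan_lib resolves empty in this run):

  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
  # The third ldd column is the resolved library path, if any.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin"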
00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:02.460 "params": { 00:26:02.460 "name": "Nvme0", 00:26:02.460 "trtype": "tcp", 00:26:02.460 "traddr": "10.0.0.2", 00:26:02.460 "adrfam": "ipv4", 00:26:02.460 "trsvcid": "4420", 00:26:02.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:02.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:02.460 "hdgst": true, 00:26:02.460 "ddgst": true 00:26:02.460 }, 00:26:02.460 "method": "bdev_nvme_attach_controller" 00:26:02.460 }' 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:02.460 19:23:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:02.460 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:02.460 ... 
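fio_dif_digest repeats the rand_params flow with two deltas, both visible in the printed config above: the null bdev is created with DIF type 3, and the initiator connection enables header and data digests. A minimal sketch of the changed pieces only:

  # Protection information type 3 on the namespace ...
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # ... and digests on the connection: the generated attach params carry
  #     "hdgst": true, "ddgst": true  instead of the defaults.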
00:26:02.460 fio-3.35 00:26:02.460 Starting 3 threads 00:26:02.460 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.655 00:26:14.655 filename0: (groupid=0, jobs=1): err= 0: pid=2657016: Wed Jul 24 19:23:18 2024 00:26:14.655 read: IOPS=187, BW=23.5MiB/s (24.6MB/s)(236MiB/10048msec) 00:26:14.655 slat (nsec): min=5363, max=64830, avg=18456.40, stdev=3573.81 00:26:14.655 clat (usec): min=13059, max=53264, avg=15928.11, stdev=1472.80 00:26:14.655 lat (usec): min=13072, max=53282, avg=15946.57, stdev=1472.71 00:26:14.655 clat percentiles (usec): 00:26:14.655 | 1.00th=[13698], 5.00th=[14484], 10.00th=[14746], 20.00th=[15139], 00:26:14.655 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15926], 60.00th=[16057], 00:26:14.655 | 70.00th=[16319], 80.00th=[16581], 90.00th=[17171], 95.00th=[17433], 00:26:14.655 | 99.00th=[18482], 99.50th=[19268], 99.90th=[46924], 99.95th=[53216], 00:26:14.655 | 99.99th=[53216] 00:26:14.655 bw ( KiB/s): min=22573, max=24832, per=33.71%, avg=24117.45, stdev=466.34, samples=20 00:26:14.655 iops : min= 176, max= 194, avg=188.40, stdev= 3.70, samples=20 00:26:14.655 lat (msec) : 20=99.74%, 50=0.21%, 100=0.05% 00:26:14.655 cpu : usr=94.63%, sys=4.75%, ctx=81, majf=0, minf=201 00:26:14.655 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:14.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.655 issued rwts: total=1887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.655 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:14.655 filename0: (groupid=0, jobs=1): err= 0: pid=2657017: Wed Jul 24 19:23:18 2024 00:26:14.655 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(252MiB/10048msec) 00:26:14.655 slat (usec): min=5, max=146, avg=22.12, stdev= 9.36 00:26:14.655 clat (usec): min=11816, max=51583, avg=14934.34, stdev=1438.04 00:26:14.655 lat (usec): min=11832, max=51611, avg=14956.46, stdev=1437.62 00:26:14.655 clat percentiles (usec): 00:26:14.655 | 1.00th=[12780], 5.00th=[13435], 10.00th=[13698], 20.00th=[14091], 00:26:14.655 | 30.00th=[14484], 40.00th=[14746], 50.00th=[14877], 60.00th=[15139], 00:26:14.655 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16057], 95.00th=[16319], 00:26:14.655 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18744], 99.95th=[50070], 00:26:14.655 | 99.99th=[51643] 00:26:14.655 bw ( KiB/s): min=25344, max=26368, per=35.94%, avg=25715.20, stdev=293.36, samples=20 00:26:14.655 iops : min= 198, max= 206, avg=200.90, stdev= 2.29, samples=20 00:26:14.655 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:26:14.655 cpu : usr=89.30%, sys=7.02%, ctx=483, majf=0, minf=129 00:26:14.655 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:14.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.655 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.655 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:14.655 filename0: (groupid=0, jobs=1): err= 0: pid=2657018: Wed Jul 24 19:23:18 2024 00:26:14.655 read: IOPS=170, BW=21.4MiB/s (22.4MB/s)(215MiB/10048msec) 00:26:14.655 slat (nsec): min=5361, max=37384, avg=18453.61, stdev=4060.59 00:26:14.655 clat (usec): min=14038, max=57003, avg=17506.51, stdev=1653.14 00:26:14.655 lat (usec): min=14054, max=57018, avg=17524.96, stdev=1653.07 00:26:14.655 clat percentiles (usec): 00:26:14.655 | 1.00th=[15008], 
5.00th=[15795], 10.00th=[16188], 20.00th=[16581], 00:26:14.655 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:26:14.655 | 70.00th=[17957], 80.00th=[18220], 90.00th=[19006], 95.00th=[19268], 00:26:14.655 | 99.00th=[20579], 99.50th=[20841], 99.90th=[50594], 99.95th=[56886], 00:26:14.655 | 99.99th=[56886] 00:26:14.655 bw ( KiB/s): min=21504, max=22528, per=30.67%, avg=21941.40, stdev=335.71, samples=20 00:26:14.655 iops : min= 168, max= 176, avg=171.40, stdev= 2.60, samples=20 00:26:14.655 lat (msec) : 20=98.19%, 50=1.69%, 100=0.12% 00:26:14.655 cpu : usr=94.72%, sys=4.75%, ctx=53, majf=0, minf=81 00:26:14.655 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:14.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.655 issued rwts: total=1717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.655 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:14.655 00:26:14.655 Run status group 0 (all jobs): 00:26:14.655 READ: bw=69.9MiB/s (73.3MB/s), 21.4MiB/s-25.0MiB/s (22.4MB/s-26.2MB/s), io=702MiB (736MB), run=10048-10048msec 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.655 00:26:14.655 real 0m11.114s 00:26:14.655 user 0m28.911s 00:26:14.655 sys 0m1.898s 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:14.655 19:23:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:14.655 ************************************ 00:26:14.655 END TEST fio_dif_digest 00:26:14.655 ************************************ 00:26:14.655 19:23:19 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:14.655 19:23:19 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:14.655 19:23:19 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:14.655 19:23:19 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:14.655 19:23:19 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:14.655 19:23:19 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:14.655 19:23:19 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:14.655 19:23:19 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:14.655 rmmod nvme_tcp 00:26:14.655 rmmod nvme_fabrics 00:26:14.655 rmmod 
nvme_keyring 00:26:14.655 19:23:19 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:14.655 19:23:19 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:14.655 19:23:19 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:14.655 19:23:19 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2652263 ']' 00:26:14.655 19:23:19 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2652263 00:26:14.655 19:23:19 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2652263 ']' 00:26:14.655 19:23:19 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2652263 00:26:14.655 19:23:19 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:26:14.655 19:23:19 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:14.655 19:23:19 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2652263 00:26:14.655 19:23:19 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:14.655 19:23:19 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:14.655 19:23:19 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2652263' 00:26:14.655 killing process with pid 2652263 00:26:14.655 19:23:19 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2652263 00:26:14.655 19:23:19 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2652263 00:26:14.655 19:23:19 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:14.655 19:23:19 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:14.655 Waiting for block devices as requested 00:26:14.655 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:26:14.655 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:26:14.655 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:26:14.655 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:26:14.914 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:26:14.914 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:26:14.914 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:26:14.914 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:26:15.174 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:26:15.174 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:26:15.174 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:26:15.174 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:26:15.434 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:26:15.434 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:26:15.434 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:26:15.693 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:26:15.693 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:26:15.693 19:23:21 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:15.693 19:23:21 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:15.693 19:23:21 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:15.693 19:23:21 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:15.693 19:23:21 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.693 19:23:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:15.693 19:23:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.231 19:23:23 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:18.231 00:26:18.231 real 1m5.756s 00:26:18.231 user 6m29.106s 00:26:18.231 sys 0m15.770s 00:26:18.231 19:23:23 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:18.231 19:23:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
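nvmftestfini's ordering above is deliberate: sync, unload the kernel initiator modules, kill the target app, then hand the PCI devices back to their kernel drivers. A sketch with the pid and paths from this run:

  modprobe -v -r nvme-tcp    # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill 2652263               # the nvmf_tgt pid that killprocess waited on
  scripts/setup.sh reset     # rebind NVMe and ioatdma devices away from vfio-pci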
00:26:18.231 ************************************ 00:26:18.231 END TEST nvmf_dif 00:26:18.231 ************************************ 00:26:18.231 19:23:23 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:18.231 19:23:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:18.231 19:23:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:18.231 19:23:23 -- common/autotest_common.sh@10 -- # set +x 00:26:18.231 ************************************ 00:26:18.231 START TEST nvmf_abort_qd_sizes 00:26:18.231 ************************************ 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:18.231 * Looking for test storage... 00:26:18.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:18.231 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:18.232 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.232 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:18.232 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:18.232 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:18.232 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.232 19:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:18.232 19:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.232 19:23:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:18.232 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:18.232 19:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:18.232 19:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:26:19.608 Found 0000:08:00.0 (0x8086 - 0x159b) 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.608 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:26:19.609 Found 0000:08:00.1 (0x8086 - 0x159b) 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:26:19.609 Found net devices under 0000:08:00.0: cvl_0_0 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:26:19.609 Found net devices under 0000:08:00.1: cvl_0_1 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
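Interface discovery above is a sysfs glob per matching PCI ID, and the E810 pair it finds (cvl_0_0 and cvl_0_1) is then split across a network namespace so a single host can act as both target and initiator, which is what nvmf_tcp_init does next. A condensed sketch of both steps, commands as they appear in this run:

  ls /sys/bus/pci/devices/0000:08:00.0/net/    # maps the PCI address to cvl_0_0
  ip netns add cvl_0_0_ns_spdk                 # the target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1          # the initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                           # reachability check before any NVMe traffic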
00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:19.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:19.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:26:19.609 00:26:19.609 --- 10.0.0.2 ping statistics --- 00:26:19.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.609 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:19.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:26:19.609 00:26:19.609 --- 10.0.0.1 ping statistics --- 00:26:19.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.609 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:19.609 19:23:25 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:20.545 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:26:20.545 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:26:20.545 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:26:20.545 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:26:20.545 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:26:20.545 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:26:20.545 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:26:20.545 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:26:20.545 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:26:20.545 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:26:20.545 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:26:20.545 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:26:20.545 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:26:20.545 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:26:20.545 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:26:20.545 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:26:21.482 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2660733 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2660733 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2660733 ']' 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:21.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:21.482 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:21.741 [2024-07-24 19:23:27.521112] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:26:21.741 [2024-07-24 19:23:27.521205] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.741 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.741 [2024-07-24 19:23:27.587545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:21.741 [2024-07-24 19:23:27.706218] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.741 [2024-07-24 19:23:27.706273] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.741 [2024-07-24 19:23:27.706289] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.741 [2024-07-24 19:23:27.706303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.741 [2024-07-24 19:23:27.706315] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.741 [2024-07-24 19:23:27.706393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.741 [2024-07-24 19:23:27.706447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:21.741 [2024-07-24 19:23:27.706525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.741 [2024-07-24 19:23:27.706521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:84:00.0 ]] 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:84:00.0 ]] 00:26:22.000 19:23:27 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:84:00.0 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:84:00.0 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:22.000 19:23:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:22.000 ************************************ 00:26:22.000 START TEST spdk_target_abort 00:26:22.000 ************************************ 00:26:22.000 19:23:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:26:22.000 19:23:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:22.000 19:23:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:84:00.0 -b spdk_target 00:26:22.000 19:23:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.000 19:23:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:25.280 spdk_targetn1 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:25.281 [2024-07-24 19:23:30.700419] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:25.281 [2024-07-24 19:23:30.732719] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:25.281 19:23:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:25.281 EAL: No free 2048 kB hugepages 
reported on node 1 00:26:28.558 Initializing NVMe Controllers 00:26:28.558 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:28.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:28.558 Initialization complete. Launching workers. 00:26:28.558 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9850, failed: 0 00:26:28.558 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1163, failed to submit 8687 00:26:28.558 success 710, unsuccess 453, failed 0 00:26:28.558 19:23:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:28.558 19:23:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:28.558 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.845 Initializing NVMe Controllers 00:26:31.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:31.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:31.845 Initialization complete. Launching workers. 00:26:31.845 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8595, failed: 0 00:26:31.845 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1215, failed to submit 7380 00:26:31.845 success 319, unsuccess 896, failed 0 00:26:31.845 19:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:31.845 19:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:31.845 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.198 Initializing NVMe Controllers 00:26:35.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:35.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:35.198 Initialization complete. Launching workers. 
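For reference, the spdk_target_abort phase traced above reduces to the bring-up and queue-depth sweep sketched below. All commands and flags are copied from the trace; the rpc.py and abort paths are shortened from the full workspace paths shown in the log.
# export the local NVMe SSD over NVMe/TCP via the SPDK target's RPCs
rpc.py bdev_nvme_attach_controller -t pcie -a 0000:84:00.0 -b spdk_target
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
# sweep abort queue depths 4, 24, 64 against the same subsystem
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done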
00:26:35.198 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29540, failed: 0 00:26:35.198 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2659, failed to submit 26881 00:26:35.198 success 439, unsuccess 2220, failed 0 00:26:35.198 19:23:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:35.198 19:23:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.198 19:23:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.198 19:23:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.198 19:23:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:35.198 19:23:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.198 19:23:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.133 19:23:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.133 19:23:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2660733 00:26:36.133 19:23:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2660733 ']' 00:26:36.133 19:23:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2660733 00:26:36.133 19:23:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:26:36.133 19:23:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:36.133 19:23:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2660733 00:26:36.133 19:23:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:36.133 19:23:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:36.133 19:23:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2660733' 00:26:36.133 killing process with pid 2660733 00:26:36.133 19:23:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2660733 00:26:36.133 19:23:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2660733 00:26:36.133 00:26:36.133 real 0m14.231s 00:26:36.133 user 0m53.975s 00:26:36.133 sys 0m2.377s 00:26:36.133 19:23:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:36.133 19:23:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.133 ************************************ 00:26:36.133 END TEST spdk_target_abort 00:26:36.133 ************************************ 00:26:36.133 19:23:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:36.133 19:23:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:36.133 19:23:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:36.133 19:23:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:36.393 ************************************ 00:26:36.393 START TEST kernel_target_abort 00:26:36.393 
************************************ 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:36.393 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:36.394 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:36.394 19:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:37.328 Waiting for block devices as requested 00:26:37.328 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:26:37.328 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:26:37.585 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:26:37.585 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:26:37.585 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:26:37.585 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:26:37.843 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:26:37.843 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:26:37.843 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:26:37.843 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:26:38.102 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:26:38.102 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:26:38.102 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:26:38.360 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:26:38.360 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:26:38.360 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:26:38.360 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:38.618 No valid GPT data, bailing 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:38.618 19:23:44 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:26:38.618 00:26:38.618 Discovery Log Number of Records 2, Generation counter 2 00:26:38.618 =====Discovery Log Entry 0====== 00:26:38.618 trtype: tcp 00:26:38.618 adrfam: ipv4 00:26:38.618 subtype: current discovery subsystem 00:26:38.618 treq: not specified, sq flow control disable supported 00:26:38.618 portid: 1 00:26:38.618 trsvcid: 4420 00:26:38.618 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:38.618 traddr: 10.0.0.1 00:26:38.618 eflags: none 00:26:38.618 sectype: none 00:26:38.618 =====Discovery Log Entry 1====== 00:26:38.618 trtype: tcp 00:26:38.618 adrfam: ipv4 00:26:38.618 subtype: nvme subsystem 00:26:38.618 treq: not specified, sq flow control disable supported 00:26:38.618 portid: 1 00:26:38.618 trsvcid: 4420 00:26:38.618 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:38.618 traddr: 10.0.0.1 00:26:38.618 eflags: none 00:26:38.618 sectype: none 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.618 19:23:44 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:38.618 19:23:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:38.618 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.895 Initializing NVMe Controllers 00:26:41.895 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:41.895 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:41.895 Initialization complete. Launching workers. 00:26:41.895 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41184, failed: 0 00:26:41.895 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 41184, failed to submit 0 00:26:41.895 success 0, unsuccess 41184, failed 0 00:26:41.895 19:23:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:41.895 19:23:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:41.895 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.172 Initializing NVMe Controllers 00:26:45.172 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:45.172 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:45.172 Initialization complete. Launching workers. 
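For reference, the configure_kernel_target trace above corresponds to the standard Linux nvmet configfs sequence sketched below. xtrace does not print redirections, so the attribute each echo targets is inferred from the stock nvmet layout; the NQN, namespace device, and listen address match the log.
modprobe nvmet
cd /sys/kernel/config/nvmet
mkdir subsystems/nqn.2016-06.io.spdk:testnqn
mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
mkdir ports/1
# attribute name inferred for the 'echo SPDK-nqn...' seen in the trace
echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_serial
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype   # the kernel pulls in nvmet_tcp alongside nvmet here
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
# expose the subsystem on the port
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/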
00:26:45.172 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73764, failed: 0 00:26:45.172 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18598, failed to submit 55166 00:26:45.172 success 0, unsuccess 18598, failed 0 00:26:45.172 19:23:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:45.172 19:23:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:45.172 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.452 Initializing NVMe Controllers 00:26:48.452 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:48.452 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:48.452 Initialization complete. Launching workers. 00:26:48.452 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71822, failed: 0 00:26:48.452 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17938, failed to submit 53884 00:26:48.452 success 0, unsuccess 17938, failed 0 00:26:48.453 19:23:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:26:48.453 19:23:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:48.453 19:23:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:26:48.453 19:23:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:48.453 19:23:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:48.453 19:23:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:48.453 19:23:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:48.453 19:23:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:48.453 19:23:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:48.453 19:23:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:49.018 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:26:49.018 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:26:49.018 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:26:49.018 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:26:49.018 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:26:49.018 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:26:49.018 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:26:49.018 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:26:49.018 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:26:49.018 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:26:49.018 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:26:49.018 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:26:49.018 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:26:49.277 0000:80:04.2 (8086 3c22): ioatdma -> 
vfio-pci 00:26:49.277 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:26:49.277 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:26:50.216 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:26:50.216 00:26:50.216 real 0m13.844s 00:26:50.216 user 0m6.243s 00:26:50.216 sys 0m3.015s 00:26:50.216 19:23:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:50.216 19:23:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:50.216 ************************************ 00:26:50.216 END TEST kernel_target_abort 00:26:50.216 ************************************ 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:50.216 rmmod nvme_tcp 00:26:50.216 rmmod nvme_fabrics 00:26:50.216 rmmod nvme_keyring 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2660733 ']' 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2660733 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2660733 ']' 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2660733 00:26:50.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2660733) - No such process 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2660733 is not found' 00:26:50.216 Process with pid 2660733 is not found 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:50.216 19:23:56 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:51.150 Waiting for block devices as requested 00:26:51.150 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:26:51.408 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:26:51.408 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:26:51.408 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:26:51.408 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:26:51.666 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:26:51.666 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:26:51.666 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:26:51.666 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:26:51.924 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:26:51.924 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:26:51.924 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:26:52.187 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:26:52.187 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:26:52.187 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:26:52.187 0000:80:04.1 
(8086 3c21): vfio-pci -> ioatdma 00:26:52.448 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:26:52.448 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:52.448 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:52.448 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.448 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:52.448 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.448 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:52.448 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.349 19:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:54.349 00:26:54.349 real 0m36.668s 00:26:54.349 user 1m2.052s 00:26:54.349 sys 0m8.345s 00:26:54.349 19:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:54.349 19:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:54.349 ************************************ 00:26:54.349 END TEST nvmf_abort_qd_sizes 00:26:54.349 ************************************ 00:26:54.608 19:24:00 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:54.608 19:24:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:54.608 19:24:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:54.608 19:24:00 -- common/autotest_common.sh@10 -- # set +x 00:26:54.608 ************************************ 00:26:54.608 START TEST keyring_file 00:26:54.608 ************************************ 00:26:54.608 19:24:00 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:54.608 * Looking for test storage... 
00:26:54.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:26:54.608 19:24:00 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:26:54.608 19:24:00 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.608 19:24:00 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.608 19:24:00 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.608 19:24:00 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.608 19:24:00 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.608 19:24:00 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.608 19:24:00 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.608 19:24:00 keyring_file -- paths/export.sh@5 -- # export PATH 00:26:54.608 19:24:00 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@47 -- # : 0 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:54.608 19:24:00 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:54.608 19:24:00 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:54.608 19:24:00 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:54.608 19:24:00 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:26:54.608 19:24:00 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:26:54.608 19:24:00 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:26:54.608 19:24:00 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:54.608 19:24:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:54.608 19:24:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:54.608 19:24:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:54.608 19:24:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:54.608 19:24:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:54.608 19:24:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wlyFH6R9xN 00:26:54.608 19:24:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:54.608 19:24:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:54.608 19:24:00 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wlyFH6R9xN 00:26:54.608 19:24:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wlyFH6R9xN 00:26:54.608 19:24:00 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.wlyFH6R9xN 00:26:54.608 19:24:00 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:26:54.608 19:24:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:54.608 19:24:00 keyring_file -- keyring/common.sh@17 -- # name=key1 00:26:54.608 19:24:00 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:54.608 19:24:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:54.609 19:24:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:54.609 19:24:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.A5Zd90lP4G 00:26:54.609 19:24:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:54.609 19:24:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:54.609 19:24:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:54.609 19:24:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:54.609 19:24:00 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:26:54.609 19:24:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:54.609 19:24:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:54.609 19:24:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.A5Zd90lP4G 00:26:54.609 19:24:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.A5Zd90lP4G 00:26:54.609 19:24:00 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.A5Zd90lP4G 00:26:54.609 19:24:00 keyring_file -- keyring/file.sh@30 -- # tgtpid=2665297 00:26:54.609 19:24:00 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:26:54.609 19:24:00 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2665297 00:26:54.609 19:24:00 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2665297 ']' 00:26:54.609 19:24:00 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.609 19:24:00 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:54.609 19:24:00 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.609 19:24:00 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:54.609 19:24:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:54.867 [2024-07-24 19:24:00.631733] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
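For reference, the prep_key calls traced above amount to the few lines below: a minimal sketch assuming the same helpers sourced by the test, where format_interchange_psk wraps the inline python that encodes the key bytes plus a trailing CRC-32 into the 'NVMeTLSkey-1:...' interchange form.
key=00112233445566778899aabbccddeeff
path=$(mktemp)                             # /tmp/tmp.wlyFH6R9xN in this run
format_interchange_psk "$key" 0 > "$path"  # digest 0 selects the hash-less interchange variant
chmod 0600 "$path"                         # looser modes are rejected later in the test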
00:26:54.867 [2024-07-24 19:24:00.631835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665297 ] 00:26:54.867 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.867 [2024-07-24 19:24:00.693807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.867 [2024-07-24 19:24:00.810852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.125 19:24:01 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:55.125 19:24:01 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:26:55.125 19:24:01 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:26:55.125 19:24:01 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.125 19:24:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:55.125 [2024-07-24 19:24:01.044762] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.125 null0 00:26:55.125 [2024-07-24 19:24:01.076800] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:55.125 [2024-07-24 19:24:01.077211] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:55.125 [2024-07-24 19:24:01.084799] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.126 19:24:01 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:55.126 [2024-07-24 19:24:01.096825] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:26:55.126 request: 00:26:55.126 { 00:26:55.126 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:26:55.126 "secure_channel": false, 00:26:55.126 "listen_address": { 00:26:55.126 "trtype": "tcp", 00:26:55.126 "traddr": "127.0.0.1", 00:26:55.126 "trsvcid": "4420" 00:26:55.126 }, 00:26:55.126 "method": "nvmf_subsystem_add_listener", 00:26:55.126 "req_id": 1 00:26:55.126 } 00:26:55.126 Got JSON-RPC error response 00:26:55.126 response: 00:26:55.126 { 00:26:55.126 "code": -32602, 00:26:55.126 "message": "Invalid parameters" 00:26:55.126 } 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@653 -- # es=1 
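The NOT wrapper driving this expected failure can be sketched as below, simplified from autotest_common.sh; the real helper also distinguishes signal exits, which is the '(( es > 128 ))' check visible in the surrounding trace.
NOT() {
    local es=0
    "$@" || es=$?
    # pass only if the wrapped command failed
    (( es != 0 ))
}
# as used above: re-adding an already-existing listener must fail
NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0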
00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:55.126 19:24:01 keyring_file -- keyring/file.sh@46 -- # bperfpid=2665324 00:26:55.126 19:24:01 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:26:55.126 19:24:01 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2665324 /var/tmp/bperf.sock 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2665324 ']' 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:55.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:55.126 19:24:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:55.384 [2024-07-24 19:24:01.149196] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 00:26:55.384 [2024-07-24 19:24:01.149297] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665324 ] 00:26:55.384 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.384 [2024-07-24 19:24:01.209580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.384 [2024-07-24 19:24:01.326643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.641 19:24:01 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:55.641 19:24:01 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:26:55.641 19:24:01 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wlyFH6R9xN 00:26:55.641 19:24:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wlyFH6R9xN 00:26:55.899 19:24:01 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.A5Zd90lP4G 00:26:55.899 19:24:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.A5Zd90lP4G 00:26:56.157 19:24:02 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:26:56.157 19:24:02 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:26:56.157 19:24:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:56.157 19:24:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:56.157 19:24:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:56.415 19:24:02 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.wlyFH6R9xN == \/\t\m\p\/\t\m\p\.\w\l\y\F\H\6\R\9\x\N ]] 00:26:56.415 19:24:02 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:26:56.415 19:24:02 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:26:56.415 19:24:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:56.415 19:24:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:56.415 19:24:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:56.673 19:24:02 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.A5Zd90lP4G == \/\t\m\p\/\t\m\p\.\A\5\Z\d\9\0\l\P\4\G ]] 00:26:56.673 19:24:02 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:26:56.673 19:24:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:56.673 19:24:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:56.673 19:24:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:56.673 19:24:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:56.673 19:24:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:56.931 19:24:02 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:26:56.931 19:24:02 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:26:56.931 19:24:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:56.931 19:24:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:56.931 19:24:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:56.931 19:24:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:56.931 19:24:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:57.497 19:24:03 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:26:57.497 19:24:03 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:57.497 19:24:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:57.497 [2024-07-24 19:24:03.506187] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:57.754 nvme0n1 00:26:57.754 19:24:03 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:26:57.754 19:24:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:57.754 19:24:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:57.754 19:24:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:57.754 19:24:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:57.754 19:24:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:58.012 19:24:03 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:26:58.012 19:24:03 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:26:58.012 19:24:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:58.012 19:24:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:58.012 19:24:03 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:58.012 19:24:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:58.012 19:24:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:58.270 19:24:04 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:26:58.270 19:24:04 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:58.527 Running I/O for 1 seconds... 00:26:59.461 00:26:59.461 Latency(us) 00:26:59.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.461 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:26:59.461 nvme0n1 : 1.01 7528.18 29.41 0.00 0.00 16904.70 9951.76 29515.47 00:26:59.461 =================================================================================================================== 00:26:59.461 Total : 7528.18 29.41 0.00 0.00 16904.70 9951.76 29515.47 00:26:59.461 0 00:26:59.461 19:24:05 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:59.461 19:24:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:59.719 19:24:05 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:26:59.719 19:24:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:59.719 19:24:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:59.719 19:24:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:59.719 19:24:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:59.719 19:24:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:59.977 19:24:05 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:26:59.977 19:24:05 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:26:59.977 19:24:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:59.977 19:24:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:59.977 19:24:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:59.977 19:24:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:59.977 19:24:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:00.236 19:24:06 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:00.236 19:24:06 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:00.236 19:24:06 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:00.236 19:24:06 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:00.236 19:24:06 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:00.493 19:24:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:00.493 19:24:06 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:00.493 19:24:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:00.494 19:24:06 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:00.494 19:24:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:00.751 [2024-07-24 19:24:06.528371] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:00.751 [2024-07-24 19:24:06.529234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af520 (107): Transport endpoint is not connected 00:27:00.751 [2024-07-24 19:24:06.530226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24af520 (9): Bad file descriptor 00:27:00.751 [2024-07-24 19:24:06.531224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:00.751 [2024-07-24 19:24:06.531245] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:00.751 [2024-07-24 19:24:06.531260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:00.751 request: 00:27:00.751 { 00:27:00.751 "name": "nvme0", 00:27:00.751 "trtype": "tcp", 00:27:00.751 "traddr": "127.0.0.1", 00:27:00.751 "adrfam": "ipv4", 00:27:00.751 "trsvcid": "4420", 00:27:00.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:00.751 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:00.751 "prchk_reftag": false, 00:27:00.751 "prchk_guard": false, 00:27:00.751 "hdgst": false, 00:27:00.751 "ddgst": false, 00:27:00.751 "psk": "key1", 00:27:00.751 "method": "bdev_nvme_attach_controller", 00:27:00.751 "req_id": 1 00:27:00.751 } 00:27:00.751 Got JSON-RPC error response 00:27:00.751 response: 00:27:00.751 { 00:27:00.751 "code": -5, 00:27:00.751 "message": "Input/output error" 00:27:00.751 } 00:27:00.751 19:24:06 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:00.751 19:24:06 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:00.751 19:24:06 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:00.751 19:24:06 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:00.751 19:24:06 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:00.751 19:24:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:00.751 19:24:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:00.751 19:24:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:00.751 19:24:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:00.751 19:24:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:01.009 19:24:06 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:01.009 19:24:06 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:01.009 19:24:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:01.009 19:24:06 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:01.009 19:24:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:01.009 19:24:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:01.009 19:24:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:01.268 19:24:07 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:01.268 19:24:07 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:01.268 19:24:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:01.526 19:24:07 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:01.526 19:24:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:01.784 19:24:07 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:01.784 19:24:07 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:01.784 19:24:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:01.784 19:24:07 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:01.784 19:24:07 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.wlyFH6R9xN 00:27:01.784 19:24:07 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.wlyFH6R9xN 00:27:01.784 19:24:07 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:01.784 19:24:07 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.wlyFH6R9xN 00:27:01.784 19:24:07 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:01.784 19:24:07 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.784 19:24:07 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:01.784 19:24:07 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.784 19:24:07 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wlyFH6R9xN 00:27:01.784 19:24:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wlyFH6R9xN 00:27:02.042 [2024-07-24 19:24:08.019107] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wlyFH6R9xN': 0100660 00:27:02.042 [2024-07-24 19:24:08.019159] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:02.042 request: 00:27:02.042 { 00:27:02.042 "name": "key0", 00:27:02.042 "path": "/tmp/tmp.wlyFH6R9xN", 00:27:02.042 "method": "keyring_file_add_key", 00:27:02.042 "req_id": 1 00:27:02.042 } 00:27:02.042 Got JSON-RPC error response 00:27:02.042 response: 00:27:02.042 { 00:27:02.042 "code": -1, 00:27:02.042 "message": "Operation not permitted" 00:27:02.042 } 00:27:02.042 19:24:08 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:02.042 19:24:08 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:02.042 19:24:08 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:02.042 19:24:08 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:02.042 19:24:08 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.wlyFH6R9xN 00:27:02.042 19:24:08 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wlyFH6R9xN 00:27:02.042 19:24:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wlyFH6R9xN 00:27:02.300 19:24:08 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.wlyFH6R9xN 00:27:02.300 19:24:08 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:02.559 19:24:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:02.559 19:24:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:02.559 19:24:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:02.559 19:24:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:02.559 19:24:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:02.559 19:24:08 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:02.559 19:24:08 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:02.559 19:24:08 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:02.559 19:24:08 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:02.559 19:24:08 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:02.559 19:24:08 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:02.559 19:24:08 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:02.559 19:24:08 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:02.559 19:24:08 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:02.559 19:24:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:02.817 [2024-07-24 19:24:08.789171] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.wlyFH6R9xN': No such file or directory 00:27:02.817 [2024-07-24 19:24:08.789213] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:02.817 [2024-07-24 19:24:08.789247] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:02.817 [2024-07-24 19:24:08.789261] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:02.817 [2024-07-24 19:24:08.789274] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:02.817 request: 00:27:02.817 { 00:27:02.817 "name": "nvme0", 00:27:02.817 "trtype": "tcp", 00:27:02.817 "traddr": "127.0.0.1", 00:27:02.817 "adrfam": "ipv4", 00:27:02.817 
"trsvcid": "4420", 00:27:02.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:02.817 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:02.817 "prchk_reftag": false, 00:27:02.817 "prchk_guard": false, 00:27:02.817 "hdgst": false, 00:27:02.817 "ddgst": false, 00:27:02.817 "psk": "key0", 00:27:02.817 "method": "bdev_nvme_attach_controller", 00:27:02.817 "req_id": 1 00:27:02.817 } 00:27:02.817 Got JSON-RPC error response 00:27:02.817 response: 00:27:02.817 { 00:27:02.817 "code": -19, 00:27:02.817 "message": "No such device" 00:27:02.817 } 00:27:02.817 19:24:08 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:02.817 19:24:08 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:02.817 19:24:08 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:02.817 19:24:08 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:02.817 19:24:08 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:02.817 19:24:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:03.076 19:24:09 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:03.076 19:24:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:03.076 19:24:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:03.076 19:24:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:03.076 19:24:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:03.076 19:24:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:03.076 19:24:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qTr4pUfsTi 00:27:03.076 19:24:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:03.076 19:24:09 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:03.076 19:24:09 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:03.076 19:24:09 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:03.076 19:24:09 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:03.076 19:24:09 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:03.076 19:24:09 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:03.334 19:24:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qTr4pUfsTi 00:27:03.334 19:24:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qTr4pUfsTi 00:27:03.334 19:24:09 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.qTr4pUfsTi 00:27:03.334 19:24:09 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qTr4pUfsTi 00:27:03.334 19:24:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qTr4pUfsTi 00:27:03.621 19:24:09 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:03.621 19:24:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:03.904 nvme0n1 00:27:03.904 
19:24:09 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:03.904 19:24:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:03.904 19:24:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:03.904 19:24:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:03.904 19:24:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:03.904 19:24:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:04.162 19:24:10 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:04.162 19:24:10 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:04.162 19:24:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:04.420 19:24:10 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:04.420 19:24:10 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:04.420 19:24:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:04.420 19:24:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:04.420 19:24:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:04.678 19:24:10 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:04.678 19:24:10 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:04.678 19:24:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:04.678 19:24:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:04.678 19:24:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:04.678 19:24:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:04.678 19:24:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:05.244 19:24:10 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:05.244 19:24:10 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:05.244 19:24:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:05.502 19:24:11 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:05.502 19:24:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:05.502 19:24:11 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:05.760 19:24:11 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:05.760 19:24:11 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qTr4pUfsTi 00:27:05.760 19:24:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qTr4pUfsTi 00:27:06.018 19:24:11 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.A5Zd90lP4G 00:27:06.018 19:24:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.A5Zd90lP4G 00:27:06.276 19:24:12 
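[Editor's note] Every refcnt and .removed assertion in this test funnels through the same probe: keyring_get_keys over the bperf socket, filtered with jq. A hedged reconstruction of the get_key/get_refcnt helpers behind that recurring xtrace (the jq filters are copied from the log; the function bodies are inferred):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Return the JSON object for one named key from the bperf keyring.
    get_key() {
        "$rpc" -s /var/tmp/bperf.sock keyring_get_keys \
            | jq ".[] | select(.name == \"$1\")"
    }

    get_refcnt() { get_key "$1" | jq -r .refcnt; }

    # As seen above: a key held by one attached controller reports
    # refcnt 2; after keyring_file_remove_key it lingers with refcnt 1
    # and .removed == true until bdev_nvme_detach_controller drops it.
    (( $(get_refcnt key0) == 2 ))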
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:06.276 19:24:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:06.535 nvme0n1 00:27:06.535 19:24:12 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:06.535 19:24:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:06.794 19:24:12 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:06.794 "subsystems": [ 00:27:06.794 { 00:27:06.794 "subsystem": "keyring", 00:27:06.794 "config": [ 00:27:06.794 { 00:27:06.794 "method": "keyring_file_add_key", 00:27:06.794 "params": { 00:27:06.794 "name": "key0", 00:27:06.794 "path": "/tmp/tmp.qTr4pUfsTi" 00:27:06.794 } 00:27:06.794 }, 00:27:06.794 { 00:27:06.795 "method": "keyring_file_add_key", 00:27:06.795 "params": { 00:27:06.795 "name": "key1", 00:27:06.795 "path": "/tmp/tmp.A5Zd90lP4G" 00:27:06.795 } 00:27:06.795 } 00:27:06.795 ] 00:27:06.795 }, 00:27:06.795 { 00:27:06.795 "subsystem": "iobuf", 00:27:06.795 "config": [ 00:27:06.795 { 00:27:06.795 "method": "iobuf_set_options", 00:27:06.795 "params": { 00:27:06.795 "small_pool_count": 8192, 00:27:06.795 "large_pool_count": 1024, 00:27:06.795 "small_bufsize": 8192, 00:27:06.795 "large_bufsize": 135168 00:27:06.795 } 00:27:06.795 } 00:27:06.795 ] 00:27:06.795 }, 00:27:06.795 { 00:27:06.795 "subsystem": "sock", 00:27:06.795 "config": [ 00:27:06.795 { 00:27:06.795 "method": "sock_set_default_impl", 00:27:06.795 "params": { 00:27:06.795 "impl_name": "posix" 00:27:06.795 } 00:27:06.795 }, 00:27:06.795 { 00:27:06.795 "method": "sock_impl_set_options", 00:27:06.795 "params": { 00:27:06.795 "impl_name": "ssl", 00:27:06.795 "recv_buf_size": 4096, 00:27:06.795 "send_buf_size": 4096, 00:27:06.795 "enable_recv_pipe": true, 00:27:06.795 "enable_quickack": false, 00:27:06.795 "enable_placement_id": 0, 00:27:06.795 "enable_zerocopy_send_server": true, 00:27:06.795 "enable_zerocopy_send_client": false, 00:27:06.795 "zerocopy_threshold": 0, 00:27:06.795 "tls_version": 0, 00:27:06.795 "enable_ktls": false 00:27:06.795 } 00:27:06.795 }, 00:27:06.795 { 00:27:06.795 "method": "sock_impl_set_options", 00:27:06.795 "params": { 00:27:06.795 "impl_name": "posix", 00:27:06.795 "recv_buf_size": 2097152, 00:27:06.795 "send_buf_size": 2097152, 00:27:06.795 "enable_recv_pipe": true, 00:27:06.795 "enable_quickack": false, 00:27:06.795 "enable_placement_id": 0, 00:27:06.795 "enable_zerocopy_send_server": true, 00:27:06.795 "enable_zerocopy_send_client": false, 00:27:06.795 "zerocopy_threshold": 0, 00:27:06.795 "tls_version": 0, 00:27:06.795 "enable_ktls": false 00:27:06.795 } 00:27:06.795 } 00:27:06.795 ] 00:27:06.795 }, 00:27:06.795 { 00:27:06.795 "subsystem": "vmd", 00:27:06.795 "config": [] 00:27:06.795 }, 00:27:06.795 { 00:27:06.795 "subsystem": "accel", 00:27:06.795 "config": [ 00:27:06.795 { 00:27:06.795 "method": "accel_set_options", 00:27:06.795 "params": { 00:27:06.795 "small_cache_size": 128, 00:27:06.795 "large_cache_size": 16, 00:27:06.795 "task_count": 2048, 00:27:06.795 "sequence_count": 2048, 00:27:06.795 "buf_count": 2048 00:27:06.795 } 00:27:06.795 } 00:27:06.795 ] 00:27:06.795 
}, 00:27:06.795 { 00:27:06.795 "subsystem": "bdev", 00:27:06.795 "config": [ 00:27:06.795 { 00:27:06.795 "method": "bdev_set_options", 00:27:06.795 "params": { 00:27:06.795 "bdev_io_pool_size": 65535, 00:27:06.795 "bdev_io_cache_size": 256, 00:27:06.795 "bdev_auto_examine": true, 00:27:06.795 "iobuf_small_cache_size": 128, 00:27:06.795 "iobuf_large_cache_size": 16 00:27:06.795 } 00:27:06.795 }, 00:27:06.795 { 00:27:06.795 "method": "bdev_raid_set_options", 00:27:06.795 "params": { 00:27:06.795 "process_window_size_kb": 1024, 00:27:06.795 "process_max_bandwidth_mb_sec": 0 00:27:06.795 } 00:27:06.795 }, 00:27:06.795 { 00:27:06.795 "method": "bdev_iscsi_set_options", 00:27:06.795 "params": { 00:27:06.795 "timeout_sec": 30 00:27:06.795 } 00:27:06.795 }, 00:27:06.795 { 00:27:06.795 "method": "bdev_nvme_set_options", 00:27:06.795 "params": { 00:27:06.795 "action_on_timeout": "none", 00:27:06.795 "timeout_us": 0, 00:27:06.795 "timeout_admin_us": 0, 00:27:06.795 "keep_alive_timeout_ms": 10000, 00:27:06.795 "arbitration_burst": 0, 00:27:06.795 "low_priority_weight": 0, 00:27:06.795 "medium_priority_weight": 0, 00:27:06.795 "high_priority_weight": 0, 00:27:06.795 "nvme_adminq_poll_period_us": 10000, 00:27:06.795 "nvme_ioq_poll_period_us": 0, 00:27:06.795 "io_queue_requests": 512, 00:27:06.795 "delay_cmd_submit": true, 00:27:06.795 "transport_retry_count": 4, 00:27:06.795 "bdev_retry_count": 3, 00:27:06.795 "transport_ack_timeout": 0, 00:27:06.795 "ctrlr_loss_timeout_sec": 0, 00:27:06.795 "reconnect_delay_sec": 0, 00:27:06.795 "fast_io_fail_timeout_sec": 0, 00:27:06.795 "disable_auto_failback": false, 00:27:06.795 "generate_uuids": false, 00:27:06.795 "transport_tos": 0, 00:27:06.795 "nvme_error_stat": false, 00:27:06.795 "rdma_srq_size": 0, 00:27:06.795 "io_path_stat": false, 00:27:06.795 "allow_accel_sequence": false, 00:27:06.795 "rdma_max_cq_size": 0, 00:27:06.795 "rdma_cm_event_timeout_ms": 0, 00:27:06.795 "dhchap_digests": [ 00:27:06.795 "sha256", 00:27:06.795 "sha384", 00:27:06.795 "sha512" 00:27:06.795 ], 00:27:06.795 "dhchap_dhgroups": [ 00:27:06.795 "null", 00:27:06.795 "ffdhe2048", 00:27:06.795 "ffdhe3072", 00:27:06.795 "ffdhe4096", 00:27:06.795 "ffdhe6144", 00:27:06.795 "ffdhe8192" 00:27:06.795 ] 00:27:06.795 } 00:27:06.795 }, 00:27:06.795 { 00:27:06.795 "method": "bdev_nvme_attach_controller", 00:27:06.795 "params": { 00:27:06.795 "name": "nvme0", 00:27:06.795 "trtype": "TCP", 00:27:06.795 "adrfam": "IPv4", 00:27:06.795 "traddr": "127.0.0.1", 00:27:06.795 "trsvcid": "4420", 00:27:06.795 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:06.795 "prchk_reftag": false, 00:27:06.795 "prchk_guard": false, 00:27:06.795 "ctrlr_loss_timeout_sec": 0, 00:27:06.795 "reconnect_delay_sec": 0, 00:27:06.795 "fast_io_fail_timeout_sec": 0, 00:27:06.795 "psk": "key0", 00:27:06.795 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:06.795 "hdgst": false, 00:27:06.795 "ddgst": false 00:27:06.795 } 00:27:06.795 }, 00:27:06.795 { 00:27:06.795 "method": "bdev_nvme_set_hotplug", 00:27:06.795 "params": { 00:27:06.795 "period_us": 100000, 00:27:06.795 "enable": false 00:27:06.795 } 00:27:06.795 }, 00:27:06.795 { 00:27:06.795 "method": "bdev_wait_for_examine" 00:27:06.795 } 00:27:06.795 ] 00:27:06.795 }, 00:27:06.795 { 00:27:06.795 "subsystem": "nbd", 00:27:06.795 "config": [] 00:27:06.795 } 00:27:06.795 ] 00:27:06.795 }' 00:27:06.795 19:24:12 keyring_file -- keyring/file.sh@114 -- # killprocess 2665324 00:27:06.795 19:24:12 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2665324 ']' 00:27:06.795 19:24:12 
keyring_file -- common/autotest_common.sh@954 -- # kill -0 2665324 00:27:06.795 19:24:12 keyring_file -- common/autotest_common.sh@955 -- # uname 00:27:06.795 19:24:12 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:06.795 19:24:12 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2665324 00:27:06.795 19:24:12 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:06.795 19:24:12 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:06.795 19:24:12 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2665324' 00:27:06.795 killing process with pid 2665324 00:27:06.795 19:24:12 keyring_file -- common/autotest_common.sh@969 -- # kill 2665324 00:27:06.795 Received shutdown signal, test time was about 1.000000 seconds 00:27:06.795 00:27:06.795 Latency(us) 00:27:06.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.795 =================================================================================================================== 00:27:06.795 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:06.795 19:24:12 keyring_file -- common/autotest_common.sh@974 -- # wait 2665324 00:27:07.055 19:24:12 keyring_file -- keyring/file.sh@117 -- # bperfpid=2667144 00:27:07.055 19:24:12 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2667144 /var/tmp/bperf.sock 00:27:07.055 19:24:12 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2667144 ']' 00:27:07.055 19:24:12 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:07.055 19:24:12 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:07.055 19:24:12 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.055 19:24:12 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:07.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
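[Editor's note] The teardown a few lines up (killing pid 2665324) is the killprocess helper from common/autotest_common.sh, and its xtrace spells out every step. A hedged reconstruction, with the control flow inferred from that trace:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                  # still running?
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1 # never signal a bare sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true    # reaping here is what makes bdevperf's
                               # shutdown Latency table land in the log
    }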
00:27:07.055 19:24:12 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:07.055 "subsystems": [ 00:27:07.055 { 00:27:07.055 "subsystem": "keyring", 00:27:07.055 "config": [ 00:27:07.055 { 00:27:07.055 "method": "keyring_file_add_key", 00:27:07.055 "params": { 00:27:07.055 "name": "key0", 00:27:07.055 "path": "/tmp/tmp.qTr4pUfsTi" 00:27:07.055 } 00:27:07.055 }, 00:27:07.055 { 00:27:07.055 "method": "keyring_file_add_key", 00:27:07.055 "params": { 00:27:07.055 "name": "key1", 00:27:07.055 "path": "/tmp/tmp.A5Zd90lP4G" 00:27:07.055 } 00:27:07.055 } 00:27:07.055 ] 00:27:07.055 }, 00:27:07.055 { 00:27:07.055 "subsystem": "iobuf", 00:27:07.055 "config": [ 00:27:07.055 { 00:27:07.055 "method": "iobuf_set_options", 00:27:07.055 "params": { 00:27:07.055 "small_pool_count": 8192, 00:27:07.055 "large_pool_count": 1024, 00:27:07.055 "small_bufsize": 8192, 00:27:07.055 "large_bufsize": 135168 00:27:07.055 } 00:27:07.055 } 00:27:07.055 ] 00:27:07.055 }, 00:27:07.055 { 00:27:07.055 "subsystem": "sock", 00:27:07.055 "config": [ 00:27:07.055 { 00:27:07.055 "method": "sock_set_default_impl", 00:27:07.055 "params": { 00:27:07.055 "impl_name": "posix" 00:27:07.055 } 00:27:07.055 }, 00:27:07.055 { 00:27:07.055 "method": "sock_impl_set_options", 00:27:07.055 "params": { 00:27:07.055 "impl_name": "ssl", 00:27:07.055 "recv_buf_size": 4096, 00:27:07.055 "send_buf_size": 4096, 00:27:07.055 "enable_recv_pipe": true, 00:27:07.055 "enable_quickack": false, 00:27:07.055 "enable_placement_id": 0, 00:27:07.055 "enable_zerocopy_send_server": true, 00:27:07.055 "enable_zerocopy_send_client": false, 00:27:07.055 "zerocopy_threshold": 0, 00:27:07.055 "tls_version": 0, 00:27:07.055 "enable_ktls": false 00:27:07.055 } 00:27:07.055 }, 00:27:07.055 { 00:27:07.055 "method": "sock_impl_set_options", 00:27:07.055 "params": { 00:27:07.055 "impl_name": "posix", 00:27:07.055 "recv_buf_size": 2097152, 00:27:07.055 "send_buf_size": 2097152, 00:27:07.055 "enable_recv_pipe": true, 00:27:07.055 "enable_quickack": false, 00:27:07.055 "enable_placement_id": 0, 00:27:07.055 "enable_zerocopy_send_server": true, 00:27:07.055 "enable_zerocopy_send_client": false, 00:27:07.055 "zerocopy_threshold": 0, 00:27:07.055 "tls_version": 0, 00:27:07.055 "enable_ktls": false 00:27:07.055 } 00:27:07.055 } 00:27:07.055 ] 00:27:07.055 }, 00:27:07.055 { 00:27:07.055 "subsystem": "vmd", 00:27:07.055 "config": [] 00:27:07.055 }, 00:27:07.055 { 00:27:07.055 "subsystem": "accel", 00:27:07.055 "config": [ 00:27:07.055 { 00:27:07.055 "method": "accel_set_options", 00:27:07.055 "params": { 00:27:07.055 "small_cache_size": 128, 00:27:07.055 "large_cache_size": 16, 00:27:07.055 "task_count": 2048, 00:27:07.055 "sequence_count": 2048, 00:27:07.055 "buf_count": 2048 00:27:07.055 } 00:27:07.055 } 00:27:07.055 ] 00:27:07.055 }, 00:27:07.055 { 00:27:07.055 "subsystem": "bdev", 00:27:07.055 "config": [ 00:27:07.055 { 00:27:07.055 "method": "bdev_set_options", 00:27:07.055 "params": { 00:27:07.055 "bdev_io_pool_size": 65535, 00:27:07.055 "bdev_io_cache_size": 256, 00:27:07.055 "bdev_auto_examine": true, 00:27:07.055 "iobuf_small_cache_size": 128, 00:27:07.055 "iobuf_large_cache_size": 16 00:27:07.055 } 00:27:07.055 }, 00:27:07.055 { 00:27:07.055 "method": "bdev_raid_set_options", 00:27:07.055 "params": { 00:27:07.055 "process_window_size_kb": 1024, 00:27:07.055 "process_max_bandwidth_mb_sec": 0 00:27:07.055 } 00:27:07.055 }, 00:27:07.055 { 00:27:07.055 "method": "bdev_iscsi_set_options", 00:27:07.055 "params": { 00:27:07.055 "timeout_sec": 30 00:27:07.055 } 00:27:07.055 
}, 00:27:07.055 { 00:27:07.055 "method": "bdev_nvme_set_options", 00:27:07.055 "params": { 00:27:07.055 "action_on_timeout": "none", 00:27:07.055 "timeout_us": 0, 00:27:07.055 "timeout_admin_us": 0, 00:27:07.055 "keep_alive_timeout_ms": 10000, 00:27:07.055 "arbitration_burst": 0, 00:27:07.055 "low_priority_weight": 0, 00:27:07.055 "medium_priority_weight": 0, 00:27:07.055 "high_priority_weight": 0, 00:27:07.055 "nvme_adminq_poll_period_us": 10000, 00:27:07.055 "nvme_ioq_poll_period_us": 0, 00:27:07.055 "io_queue_requests": 512, 00:27:07.055 "delay_cmd_submit": true, 00:27:07.055 "transport_retry_count": 4, 00:27:07.055 "bdev_retry_count": 3, 00:27:07.055 "transport_ack_timeout": 0, 00:27:07.055 "ctrlr_loss_timeout_sec": 0, 00:27:07.055 "reconnect_delay_sec": 0, 00:27:07.055 "fast_io_fail_timeout_sec": 0, 00:27:07.055 "disable_auto_failback": false, 00:27:07.055 "generate_uuids": false, 00:27:07.055 "transport_tos": 0, 00:27:07.055 "nvme_error_stat": false, 00:27:07.055 "rdma_srq_size": 0, 00:27:07.055 "io_path_stat": false, 00:27:07.055 "allow_accel_sequence": false, 00:27:07.055 "rdma_max_cq_size": 0, 00:27:07.055 "rdma_cm_event_timeout_ms": 0, 00:27:07.055 "dhchap_digests": [ 00:27:07.055 "sha256", 00:27:07.055 "sha384", 00:27:07.055 "sha512" 00:27:07.055 ], 00:27:07.055 "dhchap_dhgroups": [ 00:27:07.055 "null", 00:27:07.055 "ffdhe2048", 00:27:07.055 "ffdhe3072", 00:27:07.055 "ffdhe4096", 00:27:07.055 "ffdhe6144", 00:27:07.055 "ffdhe8192" 00:27:07.055 ] 00:27:07.055 } 00:27:07.055 }, 00:27:07.055 { 00:27:07.055 "method": "bdev_nvme_attach_controller", 00:27:07.055 "params": { 00:27:07.055 "name": "nvme0", 00:27:07.055 "trtype": "TCP", 00:27:07.055 "adrfam": "IPv4", 00:27:07.055 "traddr": "127.0.0.1", 00:27:07.055 "trsvcid": "4420", 00:27:07.055 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:07.055 "prchk_reftag": false, 00:27:07.055 "prchk_guard": false, 00:27:07.055 "ctrlr_loss_timeout_sec": 0, 00:27:07.055 "reconnect_delay_sec": 0, 00:27:07.055 "fast_io_fail_timeout_sec": 0, 00:27:07.055 "psk": "key0", 00:27:07.055 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:07.055 "hdgst": false, 00:27:07.055 "ddgst": false 00:27:07.055 } 00:27:07.055 }, 00:27:07.056 { 00:27:07.056 "method": "bdev_nvme_set_hotplug", 00:27:07.056 "params": { 00:27:07.056 "period_us": 100000, 00:27:07.056 "enable": false 00:27:07.056 } 00:27:07.056 }, 00:27:07.056 { 00:27:07.056 "method": "bdev_wait_for_examine" 00:27:07.056 } 00:27:07.056 ] 00:27:07.056 }, 00:27:07.056 { 00:27:07.056 "subsystem": "nbd", 00:27:07.056 "config": [] 00:27:07.056 } 00:27:07.056 ] 00:27:07.056 }' 00:27:07.056 19:24:12 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.056 19:24:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:07.056 [2024-07-24 19:24:13.017528] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
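[Editor's note] The JSON blob just echoed is the save_config output captured at file.sh@112, handed to a second bdevperf as its -c config file so the two keys and the TLS-enabled controller are re-created at startup instead of via live RPCs; the log shows it arriving as /dev/fd/63. A minimal sketch of that save/replay pattern, with paths as shown in the log and process substitution standing in for the script's fd-63 plumbing:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    # Dump the first bperf's live configuration over its RPC socket.
    config=$("$SPDK/scripts/rpc.py" -s "$SOCK" save_config)

    # -z: idle until a perform_tests RPC arrives; the process
    # substitution is what appears in the log as -c /dev/fd/63.
    "$SPDK/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r "$SOCK" -z -c <(echo "$config")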
00:27:07.056 [2024-07-24 19:24:13.017628] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667144 ] 00:27:07.056 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.314 [2024-07-24 19:24:13.078223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.314 [2024-07-24 19:24:13.198615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.570 [2024-07-24 19:24:13.377382] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:08.137 19:24:14 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:08.137 19:24:14 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:27:08.137 19:24:14 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:08.137 19:24:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:08.137 19:24:14 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:08.395 19:24:14 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:08.395 19:24:14 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:08.395 19:24:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:08.395 19:24:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:08.395 19:24:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:08.395 19:24:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:08.395 19:24:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:08.654 19:24:14 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:08.654 19:24:14 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:08.654 19:24:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:08.654 19:24:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:08.654 19:24:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:08.654 19:24:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:08.654 19:24:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:09.221 19:24:14 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:09.221 19:24:14 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:09.221 19:24:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:09.221 19:24:14 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:09.481 19:24:15 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:09.481 19:24:15 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:09.481 19:24:15 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.qTr4pUfsTi /tmp/tmp.A5Zd90lP4G 00:27:09.481 19:24:15 keyring_file -- keyring/file.sh@20 -- # killprocess 2667144 00:27:09.481 19:24:15 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2667144 ']' 00:27:09.481 19:24:15 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2667144 00:27:09.481 19:24:15 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:27:09.481 19:24:15 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:09.481 19:24:15 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2667144 00:27:09.481 19:24:15 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:09.481 19:24:15 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:09.481 19:24:15 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2667144' 00:27:09.481 killing process with pid 2667144 00:27:09.481 19:24:15 keyring_file -- common/autotest_common.sh@969 -- # kill 2667144 00:27:09.481 Received shutdown signal, test time was about 1.000000 seconds 00:27:09.481 00:27:09.481 Latency(us) 00:27:09.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.481 =================================================================================================================== 00:27:09.481 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:09.481 19:24:15 keyring_file -- common/autotest_common.sh@974 -- # wait 2667144 00:27:09.742 19:24:15 keyring_file -- keyring/file.sh@21 -- # killprocess 2665297 00:27:09.742 19:24:15 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2665297 ']' 00:27:09.742 19:24:15 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2665297 00:27:09.742 19:24:15 keyring_file -- common/autotest_common.sh@955 -- # uname 00:27:09.742 19:24:15 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:09.742 19:24:15 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2665297 00:27:09.742 19:24:15 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:09.742 19:24:15 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:09.742 19:24:15 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2665297' 00:27:09.742 killing process with pid 2665297 00:27:09.742 19:24:15 keyring_file -- common/autotest_common.sh@969 -- # kill 2665297 00:27:09.742 [2024-07-24 19:24:15.523518] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:09.742 19:24:15 keyring_file -- common/autotest_common.sh@974 -- # wait 2665297 00:27:10.002 00:27:10.002 real 0m15.452s 00:27:10.002 user 0m39.443s 00:27:10.002 sys 0m3.296s 00:27:10.002 19:24:15 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:10.002 19:24:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:10.002 ************************************ 00:27:10.002 END TEST keyring_file 00:27:10.002 ************************************ 00:27:10.002 19:24:15 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:27:10.002 19:24:15 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:10.002 19:24:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:10.002 19:24:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:10.002 19:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:10.002 ************************************ 00:27:10.002 START TEST keyring_linux 00:27:10.002 ************************************ 00:27:10.002 19:24:15 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:10.002 * Looking for test 
storage... 00:27:10.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:10.003 19:24:15 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:10.003 19:24:15 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.003 19:24:15 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.003 19:24:15 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.003 19:24:15 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.003 19:24:15 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.003 19:24:15 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.003 19:24:15 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.003 19:24:15 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:10.003 19:24:15 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:10.003 19:24:15 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:10.003 19:24:15 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:10.003 19:24:15 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:10.003 19:24:15 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:10.003 19:24:15 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:10.003 19:24:15 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:10.003 19:24:15 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:10.003 19:24:15 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:10.003 19:24:15 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:10.003 19:24:15 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:10.003 19:24:15 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:10.003 19:24:15 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:10.003 19:24:15 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:10.003 19:24:15 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:10.263 19:24:16 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:10.263 19:24:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:10.263 /tmp/:spdk-test:key0 00:27:10.263 19:24:16 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:10.263 19:24:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:10.263 19:24:16 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:10.263 19:24:16 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:10.263 19:24:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:10.263 19:24:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:10.263 19:24:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:10.263 19:24:16 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:10.263 19:24:16 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:10.263 19:24:16 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:10.263 19:24:16 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:10.263 19:24:16 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:10.263 19:24:16 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:10.263 19:24:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:10.263 19:24:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:10.263 /tmp/:spdk-test:key1 00:27:10.263 19:24:16 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2667525 00:27:10.263 19:24:16 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:10.263 19:24:16 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2667525 00:27:10.263 19:24:16 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2667525 ']' 00:27:10.263 19:24:16 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.263 19:24:16 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:10.263 19:24:16 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.263 19:24:16 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:10.263 19:24:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:10.263 [2024-07-24 19:24:16.137733] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
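[Editor's note] prep_key's inline python call above is where the NVMeTLSkey-1 strings come from. A sketch of what format_interchange_psk appears to compute — the configured key characters, their CRC32 appended little-endian, base64-encoded, wrapped in TP 8006 interchange framing — checked only against the sample values visible in this log:

    # Hedged sketch of the PSK interchange formatting used by prep_key;
    # the CRC32/base64 layout is inferred from the inline python call
    # and the NVMeTLSkey-1:00:... payloads in this log.
    format_interchange_psk() {
        local key=$1 digest=${2:-0}
        python3 -c '
    import base64, sys, zlib
    key, digest = sys.argv[1].encode(), int(sys.argv[2])
    crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte little-endian CRC32
    print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
    ' "$key" "$digest"
    }

    format_interchange_psk 00112233445566778899aabbccddeeff 0
    # -> NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: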
00:27:10.263 [2024-07-24 19:24:16.137834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667525 ] 00:27:10.263 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.263 [2024-07-24 19:24:16.202015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.523 [2024-07-24 19:24:16.321033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.783 19:24:16 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:10.783 19:24:16 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:27:10.783 19:24:16 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:10.783 19:24:16 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.783 19:24:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:10.783 [2024-07-24 19:24:16.555910] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.783 null0 00:27:10.783 [2024-07-24 19:24:16.587959] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:10.783 [2024-07-24 19:24:16.588387] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:10.783 19:24:16 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.783 19:24:16 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:10.783 1018425856 00:27:10.783 19:24:16 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:10.783 142424391 00:27:10.783 19:24:16 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2667545 00:27:10.783 19:24:16 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2667545 /var/tmp/bperf.sock 00:27:10.783 19:24:16 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:10.783 19:24:16 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2667545 ']' 00:27:10.783 19:24:16 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:10.783 19:24:16 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:10.783 19:24:16 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:10.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:10.783 19:24:16 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:10.783 19:24:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:10.783 [2024-07-24 19:24:16.660301] Starting SPDK v24.09-pre git sha1 ee633e585 / DPDK 24.03.0 initialization... 
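[Editor's note] linux.sh@66-67 above seeds the kernel session keyring directly; the serial numbers keyctl prints (1018425856 and 142424391) are what check_keys later matches against the .sn field from keyring_get_keys. The round trip, with the payload copied from the log:

    # Kernel-keyring plumbing exercised by keyring_linux: add the PSK
    # to the session keyring (@s), look its serial back up by name as
    # get_keysn does, dump the payload, and unlink it during cleanup.
    payload='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

    keyctl add user :spdk-test:key0 "$payload" @s   # prints the serial (1018425856 here)
    sn=$(keyctl search @s user :spdk-test:key0)     # recover the serial by name
    keyctl print "$sn"                               # echoes the payload back
    keyctl unlink "$sn"                              # cleanup's "1 links removed"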
00:27:10.783 [2024-07-24 19:24:16.660397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667545 ] 00:27:10.783 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.783 [2024-07-24 19:24:16.720304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.041 [2024-07-24 19:24:16.837273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.041 19:24:16 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:11.041 19:24:16 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:27:11.041 19:24:16 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:11.041 19:24:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:11.300 19:24:17 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:11.300 19:24:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:11.871 19:24:17 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:11.871 19:24:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:11.871 [2024-07-24 19:24:17.863728] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:12.130 nvme0n1 00:27:12.130 19:24:17 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:12.130 19:24:17 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:12.130 19:24:17 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:12.130 19:24:17 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:12.130 19:24:17 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:12.130 19:24:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:12.389 19:24:18 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:12.389 19:24:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:12.389 19:24:18 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:12.389 19:24:18 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:12.389 19:24:18 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:12.389 19:24:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:12.389 19:24:18 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:12.648 19:24:18 keyring_linux -- keyring/linux.sh@25 -- # sn=1018425856 00:27:12.648 19:24:18 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:12.648 19:24:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:27:12.648 19:24:18 keyring_linux -- keyring/linux.sh@26 -- # [[ 1018425856 == \1\0\1\8\4\2\5\8\5\6 ]] 00:27:12.648 19:24:18 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1018425856 00:27:12.648 19:24:18 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:12.648 19:24:18 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:12.907 Running I/O for 1 seconds... 00:27:13.843 00:27:13.843 Latency(us) 00:27:13.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.843 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:13.843 nvme0n1 : 1.01 7806.92 30.50 0.00 0.00 16262.45 4538.97 21554.06 00:27:13.843 =================================================================================================================== 00:27:13.843 Total : 7806.92 30.50 0.00 0.00 16262.45 4538.97 21554.06 00:27:13.843 0 00:27:13.843 19:24:19 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:13.843 19:24:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:14.101 19:24:20 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:14.101 19:24:20 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:14.101 19:24:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:14.101 19:24:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:14.101 19:24:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:14.101 19:24:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:14.359 19:24:20 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:14.359 19:24:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:14.359 19:24:20 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:14.359 19:24:20 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:14.359 19:24:20 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:27:14.359 19:24:20 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:14.359 19:24:20 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:14.359 19:24:20 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:14.359 19:24:20 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:14.359 19:24:20 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:14.359 19:24:20 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:14.359 19:24:20 keyring_linux -- 
00:27:14.359 19:24:20 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:27:14.359 19:24:20 keyring_linux -- common/autotest_common.sh@650 -- # local es=0
00:27:14.359 19:24:20 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:27:14.359 19:24:20 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:27:14.359 19:24:20 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:14.359 19:24:20 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:27:14.359 19:24:20 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:14.359 19:24:20 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:27:14.359 19:24:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:27:14.621 [2024-07-24 19:24:20.626129] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:27:14.621 [2024-07-24 19:24:20.626266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff6d60 (107): Transport endpoint is not connected
00:27:14.621 [2024-07-24 19:24:20.627258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff6d60 (9): Bad file descriptor
00:27:14.621 [2024-07-24 19:24:20.628257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:27:14.621 [2024-07-24 19:24:20.628278] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:27:14.621 [2024-07-24 19:24:20.628293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:27:14.621 request:
00:27:14.621 {
00:27:14.621 "name": "nvme0",
00:27:14.621 "trtype": "tcp",
00:27:14.621 "traddr": "127.0.0.1",
00:27:14.621 "adrfam": "ipv4",
00:27:14.621 "trsvcid": "4420",
00:27:14.621 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:14.621 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:27:14.621 "prchk_reftag": false,
00:27:14.621 "prchk_guard": false,
00:27:14.621 "hdgst": false,
00:27:14.621 "ddgst": false,
00:27:14.621 "psk": ":spdk-test:key1",
00:27:14.621 "method": "bdev_nvme_attach_controller",
00:27:14.621 "req_id": 1
00:27:14.621 }
00:27:14.621 Got JSON-RPC error response
00:27:14.621 response:
00:27:14.621 {
00:27:14.621 "code": -5,
00:27:14.621 "message": "Input/output error"
00:27:14.621 }
00:27:14.882 19:24:20 keyring_linux -- common/autotest_common.sh@653 -- # es=1
00:27:14.882 19:24:20 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:27:14.882 19:24:20 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:27:14.882 19:24:20 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 ))
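The `NOT bperf_cmd ...` wrapper above is an expected-failure assertion: the attach with :spdk-test:key1 must fail, and the trace shows es going from 0 to 1 before the final `(( !es == 0 ))` check. Reduced to its essentials (a paraphrase of the helper as reconstructed from the xtrace, not the verbatim autotest_common.sh source):

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command and record its exit status
        (( !es == 0 ))   # succeed (exit 0) only if the command failed
    }

Here the wrapped attach returns the -5 Input/output error shown in the JSON-RPC response, so es becomes 1 and NOT itself succeeds.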
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@33 -- # sn=1018425856
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1018425856
00:27:14.882 1 links removed
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@33 -- # sn=142424391
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 142424391
00:27:14.882 1 links removed
00:27:14.882 19:24:20 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2667545
00:27:14.882 19:24:20 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2667545 ']'
00:27:14.882 19:24:20 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2667545
00:27:14.882 19:24:20 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:27:14.882 19:24:20 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:14.882 19:24:20 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2667545
00:27:14.882 19:24:20 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:14.882 19:24:20 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:14.882 19:24:20 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2667545'
killing process with pid 2667545
00:27:14.882 19:24:20 keyring_linux -- common/autotest_common.sh@969 -- # kill 2667545
00:27:14.882 Received shutdown signal, test time was about 1.000000 seconds
00:27:14.882
00:27:14.882 Latency(us)
00:27:14.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:14.882 ===================================================================================================================
00:27:14.882 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:14.882 19:24:20 keyring_linux -- common/autotest_common.sh@974 -- # wait 2667545
00:27:15.141 19:24:20 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2667525
00:27:15.141 19:24:20 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2667525 ']'
00:27:15.141 19:24:20 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2667525
00:27:15.141 19:24:20 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:27:15.141 19:24:20 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:15.141 19:24:20 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2667525
00:27:15.141 19:24:20 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:27:15.141 19:24:20 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:27:15.141 19:24:20 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2667525'
killing process with pid 2667525
00:27:15.141 19:24:20 keyring_linux -- common/autotest_common.sh@969 -- # kill 2667525
00:27:15.141 19:24:20 keyring_linux -- common/autotest_common.sh@974 -- # wait 2667525
00:27:15.400
00:27:15.400 real 0m5.357s
00:27:15.400 user 0m10.802s
00:27:15.400 sys 0m1.630s
00:27:15.400 19:24:21 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable
00:27:15.400 19:24:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:27:15.400 ************************************
00:27:15.400 END TEST keyring_linux
00:27:15.400 ************************************
00:27:15.400 19:24:21 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:27:15.400 19:24:21 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:27:15.400 19:24:21 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:27:15.400 19:24:21 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:27:15.400 19:24:21 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:27:15.400 19:24:21 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:27:15.400 19:24:21 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:27:15.400 19:24:21 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:27:15.400 19:24:21 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:27:15.400 19:24:21 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:27:15.400 19:24:21 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']'
00:27:15.400 19:24:21 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:27:15.400 19:24:21 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:27:15.400 19:24:21 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:27:15.400 19:24:21 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]]
00:27:15.400 19:24:21 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT
00:27:15.400 19:24:21 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup
00:27:15.400 19:24:21 -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:15.400 19:24:21 -- common/autotest_common.sh@10 -- # set +x
00:27:15.400 19:24:21 -- spdk/autotest.sh@387 -- # autotest_cleanup
00:27:15.400 19:24:21 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:27:15.400 19:24:21 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:27:15.400 19:24:21 -- common/autotest_common.sh@10 -- # set +x
00:27:16.778 INFO: APP EXITING
00:27:16.778 INFO: killing all VMs
00:27:16.778 INFO: killing vhost app
00:27:16.778 WARN: no vhost pid file found
00:27:16.778 INFO: EXIT DONE
00:27:18.158 0000:84:00.0 (8086 0a54): Already using the nvme driver
00:27:18.158 0000:00:04.7 (8086 3c27): Already using the ioatdma driver
00:27:18.158 0000:00:04.6 (8086 3c26): Already using the ioatdma driver
00:27:18.158 0000:00:04.5 (8086 3c25): Already using the ioatdma driver
00:27:18.158 0000:00:04.4 (8086 3c24): Already using the ioatdma driver
00:27:18.158 0000:00:04.3 (8086 3c23): Already using the ioatdma driver
00:27:18.158 0000:00:04.2 (8086 3c22): Already using the ioatdma driver
00:27:18.158 0000:00:04.1 (8086 3c21): Already using the ioatdma driver
00:27:18.158 0000:00:04.0 (8086 3c20): Already using the ioatdma driver
00:27:18.158 0000:80:04.7 (8086 3c27): Already using the ioatdma driver
00:27:18.158 0000:80:04.6 (8086 3c26): Already using the ioatdma driver
00:27:18.158 0000:80:04.5 (8086 3c25): Already using the ioatdma driver
00:27:18.158 0000:80:04.4 (8086 3c24): Already using the ioatdma driver
00:27:18.158 0000:80:04.3 (8086 3c23): Already using the ioatdma driver
00:27:18.158 0000:80:04.2 (8086 3c22): Already using the ioatdma driver
00:27:18.158 0000:80:04.1 (8086 3c21): Already using the ioatdma driver
00:27:18.158 0000:80:04.0 (8086 3c20): Already using the ioatdma driver
00:27:19.094 Cleaning
00:27:19.094 Removing: /var/run/dpdk/spdk0/config
00:27:19.094 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:27:19.094 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:27:19.094 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:27:19.094 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:27:19.094 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:27:19.094 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:27:19.094 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:27:19.094 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:27:19.094 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:27:19.094 Removing: /var/run/dpdk/spdk0/hugepage_info
00:27:19.094 Removing: /var/run/dpdk/spdk1/config
00:27:19.094 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:27:19.094 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:27:19.094 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:27:19.094 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:27:19.094 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:27:19.094 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:27:19.094 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:27:19.094 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:27:19.094 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:27:19.094 Removing: /var/run/dpdk/spdk1/hugepage_info
00:27:19.094 Removing: /var/run/dpdk/spdk1/mp_socket
00:27:19.094 Removing: /var/run/dpdk/spdk2/config
00:27:19.094 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:27:19.094 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:27:19.094 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:27:19.094 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:27:19.094 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:27:19.094 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:27:19.094 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:27:19.094 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:27:19.094 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:27:19.094 Removing: /var/run/dpdk/spdk2/hugepage_info
00:27:19.094 Removing: /var/run/dpdk/spdk3/config
00:27:19.094 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:27:19.094 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:27:19.094 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:27:19.094 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:27:19.094 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:27:19.094 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:27:19.094 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:27:19.094 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:27:19.094 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:27:19.094 Removing: /var/run/dpdk/spdk3/hugepage_info
00:27:19.094 Removing: /var/run/dpdk/spdk4/config
00:27:19.094 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:27:19.094 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:27:19.094 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:27:19.094 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:27:19.094 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:27:19.094 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:27:19.094 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:27:19.094 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:27:19.094 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:27:19.094 Removing: /var/run/dpdk/spdk4/hugepage_info
00:27:19.354 Removing: /dev/shm/bdev_svc_trace.1
00:27:19.354 Removing: /dev/shm/nvmf_trace.0
00:27:19.354 Removing: /dev/shm/spdk_tgt_trace.pid2464749
00:27:19.354 Removing: /var/run/dpdk/spdk0
00:27:19.354 Removing: /var/run/dpdk/spdk1
00:27:19.354 Removing: /var/run/dpdk/spdk2
00:27:19.354 Removing: /var/run/dpdk/spdk3
00:27:19.354 Removing: /var/run/dpdk/spdk4
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2463478
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2464095
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2464749
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2465180
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2465660
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2465770
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2466328
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2466422
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2466632
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2467578
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2468298
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2468537
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2468694
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2468864
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2469019
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2469210
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2469361
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2469509
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2469756
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2471772
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2471923
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2472053
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2472067
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2472391
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2472405
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2472736
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2472743
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2472970
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2472975
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2473105
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2473209
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2473514
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2473644
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2473889
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2475523
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2477467
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2483082
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2483703
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2485851
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2486064
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2488017
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2490890
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2492652
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2497517
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2501533
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2502532
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2503047
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2511062
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2512868
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2533045
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2535483
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2538555
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2541555
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2541561
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2542061
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2542556
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2542981
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2543391
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2543416
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2543585
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2543685
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2543691
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2544526
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2545200
00:27:19.354 Removing: /var/run/dpdk/spdk_pid2545698
00:27:19.355 Removing: /var/run/dpdk/spdk_pid2546009
00:27:19.355 Removing: /var/run/dpdk/spdk_pid2546091
00:27:19.355 Removing: /var/run/dpdk/spdk_pid2546208
00:27:19.355 Removing: /var/run/dpdk/spdk_pid2546985
00:27:19.355 Removing: /var/run/dpdk/spdk_pid2547557
00:27:19.355 Removing: /var/run/dpdk/spdk_pid2551653
00:27:19.355 Removing: /var/run/dpdk/spdk_pid2574453
00:27:19.355 Removing: /var/run/dpdk/spdk_pid2576693
00:27:19.355 Removing: /var/run/dpdk/spdk_pid2577589
00:27:19.355 Removing: /var/run/dpdk/spdk_pid2578589
00:27:19.355 Removing: /var/run/dpdk/spdk_pid2578685
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2578710
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2579156
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2580154
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2580729
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2581058
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2582402
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2582732
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2583685
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2585549
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2590051
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2592094
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2595085
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2595836
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2596704
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2598716
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2600454
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2603639
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2603641
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2605871
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2605973
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2606083
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2606285
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2606310
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2608422
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2608768
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2610728
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2612261
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2615488
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2618079
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2623183
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2626652
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2626658
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2636732
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2637049
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2637367
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2637762
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2638214
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2638531
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2638845
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2639263
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2641454
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2641909
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2644814
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2644955
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2646219
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2650091
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2650100
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2652353
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2653423
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2654571
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2655144
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2656221
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2656887
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2661046
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2661273
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2661578
00:27:19.613 Removing: /var/run/dpdk/spdk_pid2662804
00:27:19.614 Removing: /var/run/dpdk/spdk_pid2663104
00:27:19.614 Removing: /var/run/dpdk/spdk_pid2663409
00:27:19.614 Removing: /var/run/dpdk/spdk_pid2665297
00:27:19.614 Removing: /var/run/dpdk/spdk_pid2665324
00:27:19.614 Removing: /var/run/dpdk/spdk_pid2667144
00:27:19.614 Removing: /var/run/dpdk/spdk_pid2667525
00:27:19.614 Removing: /var/run/dpdk/spdk_pid2667545
00:27:19.614 Clean
00:27:19.871 19:24:25 -- common/autotest_common.sh@1451 -- # return 0
00:27:19.871 19:24:25 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:27:19.871 19:24:25 -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:19.871 19:24:25 -- common/autotest_common.sh@10 -- # set +x
00:27:19.871 19:24:25 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:27:19.871 19:24:25 -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:19.871 19:24:25 -- common/autotest_common.sh@10 -- # set +x
00:27:19.871 19:24:25 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:27:19.871 19:24:25 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:27:19.871 19:24:25 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:27:19.871 19:24:25 -- spdk/autotest.sh@395 -- # hash lcov
00:27:19.871 19:24:25 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:27:19.871 19:24:25 -- spdk/autotest.sh@397 -- # hostname
00:27:19.871 19:24:25 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-02 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:27:20.130 geninfo: WARNING: invalid characters removed from testname!
00:27:52.199 19:24:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:53.138 19:24:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:56.425 19:25:01 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:59.805 19:25:05 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:03.101 19:25:08 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:06.397 19:25:11 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:08.938 19:25:14 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
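The coverage post-processing above boils down to one capture, one merge, and a series of subtractive filters. Condensed, with the repeated flag block abbreviated to LCOV_OPTS and the five separate -r invocations folded into a loop (same commands as the trace, shorter form):

    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q'
    # capture the counters accumulated while the tests ran, tagged with the hostname
    lcov $LCOV_OPTS -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t "$(hostname)" -o "$out/cov_test.info"
    # merge with the pre-test baseline
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # strip code that should not count toward SPDK coverage
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done
    rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR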
00:28:08.938 19:25:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:08.938 19:25:14 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:08.938 19:25:14 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:08.938 19:25:14 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:08.938 19:25:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:08.938 19:25:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:08.938 19:25:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:08.938 19:25:14 -- paths/export.sh@5 -- $ export PATH
00:28:08.938 19:25:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:08.938 19:25:14 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:28:08.938 19:25:14 -- common/autobuild_common.sh@447 -- $ date +%s
00:28:08.938 19:25:14 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721841914.XXXXXX
00:28:08.938 19:25:14 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721841914.FNv9Cs
00:28:08.938 19:25:14 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:28:08.938 19:25:14 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:28:08.938 19:25:14 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:28:08.938 19:25:14 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:28:08.938 19:25:14 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:28:08.938 19:25:14 -- common/autobuild_common.sh@463 -- $ get_config_params
00:28:08.938 19:25:14 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:28:08.938 19:25:14 -- common/autotest_common.sh@10 -- $ set +x
00:28:08.938 19:25:14 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:28:08.938 19:25:14 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:28:08.938 19:25:14 -- pm/common@17 -- $ local monitor
00:28:08.938 19:25:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:08.938 19:25:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:08.938 19:25:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:08.938 19:25:14 -- pm/common@21 -- $ date +%s
00:28:08.938 19:25:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:08.938 19:25:14 -- pm/common@21 -- $ date +%s
00:28:08.938 19:25:14 -- pm/common@25 -- $ sleep 1
00:28:08.938 19:25:14 -- pm/common@21 -- $ date +%s
00:28:08.938 19:25:14 -- pm/common@21 -- $ date +%s
00:28:08.938 19:25:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841914
00:28:08.938 19:25:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841914
00:28:08.938 19:25:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841914
00:28:08.938 19:25:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841914
00:28:08.938 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841914_collect-vmstat.pm.log
00:28:08.938 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841914_collect-cpu-load.pm.log
00:28:08.938 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841914_collect-cpu-temp.pm.log
00:28:08.938 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841914_collect-bmc-pm.bmc.pm.log
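start_monitor_resources launches the four power/utilization collectors with a single shared timestamp, which is why all four "Redirecting to ..." log paths above carry the same 1721841914 suffix. Schematically, with out as set by the autobuild trace above (the -d/-l/-p flag meanings, output directory, log-to-file, and log/pidfile name prefix, are inferred from the trace and the redirected log names, not from the collector sources):

    pm=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
    ts=$(date +%s)    # one timestamp shared by every collector
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
        "$pm/$mon" -d "$out/power" -l -p "monitor.autopackage.sh.$ts"
    done
    # reading BMC sensors requires root
    sudo -E "$pm/collect-bmc-pm" -d "$out/power" -l -p "monitor.autopackage.sh.$ts"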
00:28:09.879 19:25:15 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:28:09.879 19:25:15 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j32
00:28:09.879 19:25:15 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:09.879 19:25:15 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:09.879 19:25:15 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:09.879 19:25:15 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:09.879 19:25:15 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:09.879 19:25:15 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:09.879 19:25:15 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:28:09.879 19:25:15 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:09.879 19:25:15 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:28:09.879 19:25:15 -- pm/common@29 -- $ signal_monitor_resources TERM
00:28:09.879 19:25:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:28:09.879 19:25:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:09.879 19:25:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:28:09.879 19:25:15 -- pm/common@44 -- $ pid=2676165
00:28:09.879 19:25:15 -- pm/common@50 -- $ kill -TERM 2676165
00:28:09.879 19:25:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:09.879 19:25:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:28:09.879 19:25:15 -- pm/common@44 -- $ pid=2676167
00:28:09.879 19:25:15 -- pm/common@50 -- $ kill -TERM 2676167
00:28:09.879 19:25:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:09.879 19:25:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:28:09.879 19:25:15 -- pm/common@44 -- $ pid=2676169
00:28:09.879 19:25:15 -- pm/common@50 -- $ kill -TERM 2676169
00:28:09.879 19:25:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:09.879 19:25:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:28:09.879 19:25:15 -- pm/common@44 -- $ pid=2676199
00:28:09.879 19:25:15 -- pm/common@50 -- $ sudo -E kill -TERM 2676199
00:28:09.879 + [[ -n 2386947 ]]
00:28:09.879 + sudo kill 2386947
00:28:10.150 [Pipeline] }
00:28:10.169 [Pipeline] // stage
00:28:10.174 [Pipeline] }
00:28:10.191 [Pipeline] // timeout
00:28:10.196 [Pipeline] }
00:28:10.213 [Pipeline] // catchError
00:28:10.218 [Pipeline] }
00:28:10.236 [Pipeline] // wrap
00:28:10.242 [Pipeline] }
00:28:10.258 [Pipeline] // catchError
00:28:10.267 [Pipeline] stage
00:28:10.269 [Pipeline] { (Epilogue)
00:28:10.285 [Pipeline] catchError
00:28:10.287 [Pipeline] {
00:28:10.301 [Pipeline] echo
00:28:10.303 Cleanup processes
00:28:10.309 [Pipeline] sh
00:28:10.596 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:10.597 2676318 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:28:10.597 2676383 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:10.612 [Pipeline] sh
00:28:10.900 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:10.900 ++ grep -v 'sudo pgrep'
00:28:10.900 ++ awk '{print $1}'
00:28:10.900 + sudo kill -9 2676318
00:28:10.913 [Pipeline] sh
00:28:11.198 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:19.320 [Pipeline] sh
00:28:19.605 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:19.605 Artifacts sizes are good
00:28:19.620 [Pipeline] archiveArtifacts
00:28:19.627 Archiving artifacts
00:28:19.847 [Pipeline] sh
00:28:20.158 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:28:20.173 [Pipeline] cleanWs
00:28:20.182 [WS-CLEANUP] Deleting project workspace...
00:28:20.182 [WS-CLEANUP] Deferred wipeout is used...
00:28:20.188 [WS-CLEANUP] done
00:28:20.190 [Pipeline] }
00:28:20.209 [Pipeline] // catchError
00:28:20.222 [Pipeline] sh
00:28:20.503 + logger -p user.info -t JENKINS-CI
00:28:20.511 [Pipeline] }
00:28:20.527 [Pipeline] // stage
00:28:20.533 [Pipeline] }
00:28:20.549 [Pipeline] // node
00:28:20.554 [Pipeline] End of Pipeline
00:28:20.584 Finished: SUCCESS